Table of Contents

  • 0  Package Install
  • 1  Dataset: Iris (붓꽃 자료)
    • 1.1  Dataset with graphs
    • 1.2  Descriptive statistics of the datasetm
    • 1.3  Euclidean Distance of the dataset
  • 2  Weighted Maxcut
    • 2.1  Example: Fully Connected Graph with Randomized weight
    • 2.2  Brute Force Algorithms to Calculate Optimal Solution
  • 3  QAOA solving Weighted Maxcut
  • 4  Simulating with Different P-Values
  • 5  Clustering using QAOA-Weighted Maxcut with Iris Data
    • 5.1  Method 1: Brute Force Algorithms to Calculate Optimal Solution
    • 5.2  Method 2: QAOA
    • 5.3  Comparison with K-means Clustering
    • 5.4  Silhoutte Score
      • 5.4.1  Code with example
      • 5.4.2  Brute Force - Silhoutte Score
      • 5.4.3  QAOA - Silhoutte Score
      • 5.4.4  K-Means - Silhoutte Score
      • 5.4.5  True Label- Silhoutte Score
    • 5.5  Dunn Index
      • 5.5.1  Code with example
        • 5.5.1.1  클러스터 내 거리 측도 Intra-Cluster distance measure
        • 5.5.1.2  클러스터 간 거리 측도 Inter-Cluster distance measure
      • 5.5.2  Dunn Index 파이썬 구현
    • 5.6  Brute Force - Dunn Index
    • 5.7  QAOA - Dunn Index
    • 5.8  K-Means - Dunn Index
    • 5.9  True Label - Dunn Index

Package Install¶

In [1]:
# !pip install numpy
# !pip install pandas
# !pip install sklearn

# !pip install qiskit

Dataset: Iris (붓꽃 자료)¶

  • We then load the Iris data set. There is a bit of preprocessing to do in order to encode the inputs into the amplitudes of a quantum state. In the last preprocessing step, we translate the inputs x to rotation angles using the get_angles function we defined above.

Iris-Dataset-Classification.png

  • image source: https://www.embedded-robotics.com/iris-dataset-classification/

  • X variables: [Sepal length, Sepal width, Petal length, Petal width], (각 Sepal: 꽃받침, Petal: 꽃잎 의 가로, 세로 길이)

  • Y variable: Species of iris flowers (0:"setosa", 1:"versicolor", 2:"virginica")
  • We are trying to classify iris flowers to correct type of iris flowers using the lengths of various parts of the flower.
In [2]:
from sklearn.datasets import load_iris
import pandas as pd
import numpy as np

# visualization package
import matplotlib.pyplot as plt
import seaborn as sns

# sample data load
iris = load_iris()

# pring out data with variable types and its description
# print(iris)
In [3]:
# Description of the dataset
print(iris.DESCR)
.. _iris_dataset:

Iris plants dataset
--------------------

**Data Set Characteristics:**

    :Number of Instances: 150 (50 in each of three classes)
    :Number of Attributes: 4 numeric, predictive attributes and the class
    :Attribute Information:
        - sepal length in cm
        - sepal width in cm
        - petal length in cm
        - petal width in cm
        - class:
                - Iris-Setosa
                - Iris-Versicolour
                - Iris-Virginica
                
    :Summary Statistics:

    ============== ==== ==== ======= ===== ====================
                    Min  Max   Mean    SD   Class Correlation
    ============== ==== ==== ======= ===== ====================
    sepal length:   4.3  7.9   5.84   0.83    0.7826
    sepal width:    2.0  4.4   3.05   0.43   -0.4194
    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
    petal width:    0.1  2.5   1.20   0.76    0.9565  (high!)
    ============== ==== ==== ======= ===== ====================

    :Missing Attribute Values: None
    :Class Distribution: 33.3% for each of 3 classes.
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
    :Date: July, 1988

The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken
from Fisher's paper. Note that it's the same as in R, but not as in the UCI
Machine Learning Repository, which has two wrong data points.

This is perhaps the best known database to be found in the
pattern recognition literature.  Fisher's paper is a classic in the field and
is referenced frequently to this day.  (See Duda & Hart, for example.)  The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant.  One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.

.. topic:: References

   - Fisher, R.A. "The use of multiple measurements in taxonomic problems"
     Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
     Mathematical Statistics" (John Wiley, NY, 1950).
   - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.
     (Q327.D83) John Wiley & Sons.  ISBN 0-471-22361-1.  See page 218.
   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
     Structure and Classification Rule for Recognition in Partially Exposed
     Environments".  IEEE Transactions on Pattern Analysis and Machine
     Intelligence, Vol. PAMI-2, No. 1, 67-71.
   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule".  IEEE Transactions
     on Information Theory, May 1972, 431-433.
   - See also: 1988 MLC Proceedings, 54-64.  Cheeseman et al"s AUTOCLASS II
     conceptual clustering system finds 3 classes in the data.
   - Many, many more ...

Dataset with graphs¶

In [4]:
# feature_names(x variables) 와 target(y variable)을 잘 나타내도록 data frame 생성
data_iris = pd.DataFrame(data=iris.data, columns=iris.feature_names)
data_iris['species'] = iris.target

# Mapping the labels-'species' with numbers
data_iris['species'] = data_iris['species'].map(
    {0: "setosa", 1: "versicolor", 2: "virginica"})
print(data_iris)

# Plot scatter plots and density distribution plots feature-wise WITH labels
sns.set(font_scale=1.5)
sns.pairplot(data_iris, hue="species", height=3)
plt.show()
     sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)  \
0                  5.1               3.5                1.4               0.2   
1                  4.9               3.0                1.4               0.2   
2                  4.7               3.2                1.3               0.2   
3                  4.6               3.1                1.5               0.2   
4                  5.0               3.6                1.4               0.2   
..                 ...               ...                ...               ...   
145                6.7               3.0                5.2               2.3   
146                6.3               2.5                5.0               1.9   
147                6.5               3.0                5.2               2.0   
148                6.2               3.4                5.4               2.3   
149                5.9               3.0                5.1               1.8   

       species  
0       setosa  
1       setosa  
2       setosa  
3       setosa  
4       setosa  
..         ...  
145  virginica  
146  virginica  
147  virginica  
148  virginica  
149  virginica  

[150 rows x 5 columns]
In [5]:
# Plot scatter plots and density distribution plots feature-wise WITHOUT any labels
sns.set(font_scale=1.5)
sns.pairplot(data_iris, height=3)
plt.show()

Descriptive statistics of the datasetm¶

In [6]:
# Descriptive statistics of the dataset
data_iris.describe()
Out[6]:
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)
count 150.000000 150.000000 150.000000 150.000000
mean 5.843333 3.057333 3.758000 1.199333
std 0.828066 0.435866 1.765298 0.762238
min 4.300000 2.000000 1.000000 0.100000
25% 5.100000 2.800000 1.600000 0.300000
50% 5.800000 3.000000 4.350000 1.300000
75% 6.400000 3.300000 5.100000 1.800000
max 7.900000 4.400000 6.900000 2.500000
In [7]:
data_iris['species'].value_counts()

# pd.crosstab(index=data_iris['species'], columns="count")
Out[7]:
setosa        50
versicolor    50
virginica     50
Name: species, dtype: int64

Euclidean Distance of the dataset¶

In [8]:
# Calculate the Euclidean distance of the data
iris_euclidean_dist = np.linalg.norm(data_iris.iloc[:, 0:4].values, axis=1)
iris_euclidean_dist
Out[8]:
array([ 6.34507683,  5.91692488,  5.83609458,  5.7497826 ,  6.32139225,
        6.88621812,  5.8966092 ,  6.23297682,  5.45618915,  5.98999165,
        6.71863081,  6.09918027,  5.83180932,  5.35817133,  7.14982517,
        7.36613874,  6.79852925,  6.34901567,  7.06470098,  6.54140658,
        6.60681466,  6.48922183,  5.92958683,  6.32771681,  6.18465844,
        6.04979338,  6.26737585,  6.44825558,  6.37181293,  5.91016074,
        5.93717104,  6.56734345,  6.79043445,  7.06328535,  5.99249531,
        6.05970296,  6.65056389,  6.2401923 ,  5.48543526,  6.31347765,
        6.24739946,  5.22685374,  5.59732079,  6.33798075,  6.64981203,
        5.83866423,  6.56124988,  5.77927331,  6.63852393,  6.15548536,
        9.12633552,  8.58487041,  9.13673902,  7.29588925,  8.5732141 ,
        7.89113427,  8.67352293,  6.45445583,  8.64985549,  7.17635005,
        6.5       ,  7.98122798,  7.60526134,  8.3468557 ,  7.37699126,
        8.70746806,  7.92842986,  7.6642025 ,  8.11048704,  7.35051019,
        8.44570897,  7.92085854,  8.49705831,  8.28130425,  8.33966426,
        8.59534758,  8.89269363,  9.04322951,  8.1798533 ,  7.24568837,
        7.18748913,  7.12039325,  7.58814865,  8.47702778,  7.78845299,
        8.38868285,  8.87918915,  8.12588457,  7.67202711,  7.36138574,
        7.60328876,  8.32646384,  7.60526134,  6.49461315,  7.61445993,
        7.78267306,  7.76079893,  8.18718511,  6.5169011 ,  7.67007171,
        9.63483264,  8.39940474,  9.93126377,  9.09395404,  9.47259204,
       10.71120908,  7.30753036, 10.22888068,  9.38189746, 10.40480658,
        9.08295106,  8.94147639,  9.48156105,  8.23043134,  8.55862138,
        9.19673855,  9.20543318, 11.11125555, 10.90642013,  8.2516665 ,
        9.77905926,  8.19817053, 10.77125805,  8.61568337,  9.62704524,
       10.06578363,  8.51821578,  8.57088093,  9.19619487,  9.85088828,
       10.16956243, 11.03675677,  9.21954446,  8.70574523,  8.79147314,
       10.52568288,  9.4005319 ,  9.16842407,  8.44274837,  9.52837867,
        9.57183368,  9.40850679,  8.39940474,  9.8275124 ,  9.72213968,
        9.28547252,  8.63423419,  9.07138358,  9.18966811,  8.54751426])
In [9]:
# Create new column of Euclidean distance
data_iris['Euclid_dist'] = iris_euclidean_dist
data_iris['Euclid_dist_sq'] = iris_euclidean_dist**2
In [10]:
# # Function that calculates Mahalanobis distance
# def mahalanobis(x=None, data=None, cov=None):
#     x_mu = x - x.mean()
#     if not cov:
#         cov = np.cov(data.values.T)
#     inv_covmat = np.linalg.inv(cov)
#     left = np.dot(x_mu, inv_covmat)
#     mahal = np.dot(left, x_mu.T)
#     return mahal.diagonal()
In [11]:
# # Calculate the Mahalanobis distance of the data
# Mahal_dist = mahalanobis(x=data_iris.iloc[:,range(4)], data=data_iris.iloc[:,range(4)])
# Mahal_dist
In [12]:
# # Create new column of Mahalanobis distance
# data_iris['Mahal_dist'] = Mahal_dist
# data_iris['Mahal_dist_sq'] = Mahal_dist**2
In [13]:
data_iris[['species', 'Euclid_dist', 'Euclid_dist_sq']]
# data_iris[['species', 'Euclid_dist','Euclid_dist_sq','Mahal_dist', 'Mahal_dist_sq']]
Out[13]:
species Euclid_dist Euclid_dist_sq
0 setosa 6.345077 40.26
1 setosa 5.916925 35.01
2 setosa 5.836095 34.06
3 setosa 5.749783 33.06
4 setosa 6.321392 39.96
... ... ... ...
145 virginica 9.285473 86.22
146 virginica 8.634234 74.55
147 virginica 9.071384 82.29
148 virginica 9.189668 84.45
149 virginica 8.547514 73.06

150 rows × 3 columns

In [14]:
# Plot scatter plots and density distribution plots feature-wise WITH labels
sns.set(font_scale=1.5)
sns.pairplot(data_iris, hue="species", height=3)
plt.show()
In [15]:
# Plot scatter plots and density distribution plots feature-wise WITHOUT any labels
sns.set(font_scale=1.5)
sns.pairplot(data_iris, height=3)
plt.show()
In [16]:
sns.pairplot(data_iris[['species', 'Euclid_dist',
             'Euclid_dist_sq']], hue="species", height=3)
# sns.pairplot(data_iris[['species', 'Euclid_dist', 'Euclid_dist_sq', 'Mahal_dist','Mahal_dist_sq']], hue="species", height=3)
Out[16]:
<seaborn.axisgrid.PairGrid at 0x1fc49d213a0>
In [17]:
sns.pairplot(data_iris[['species', 'Euclid_dist', 'Euclid_dist_sq']], height=3)
# sns.pairplot(data_iris[['species', 'Euclid_dist', 'Euclid_dist_sq', 'Mahal_dist','Mahal_dist_sq']], height=3)
Out[17]:
<seaborn.axisgrid.PairGrid at 0x1fc49bf7a30>
In [ ]:
 

Weighted Maxcut¶

Example: Fully Connected Graph with Randomized weight¶

In [18]:
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt


def draw_graph(G, col, pos):
    plt.figure(figsize=(12, 8))
    default_axes = plt.axes(frameon=True)
    nx.draw_networkx(G, node_color=col, node_size=600,
                     alpha=0.8, ax=default_axes, pos=pos, font_size=16)
    edge_labels = nx.get_edge_attributes(G, 'weight')
    nx.draw_networkx_edge_labels(
        G, pos=pos, edge_labels=edge_labels, font_size=16)


n = 6  # number of nodes in graph

np.random.seed(150)
edge_weights = np.random.randint(1, 5, size=(n, n))
edge_weights = edge_weights * edge_weights.T / 2

G = nx.Graph()
G.add_nodes_from(np.arange(0, n, 1))
for i in range(n):
    for j in range(n):
        if i > j:
            G.add_edge(i, j, weight=edge_weights[i, j])

colors = ['g' for node in G.nodes()]
pos = nx.spring_layout(G)
In [19]:
# graph G: nodes
G.nodes
Out[19]:
NodeView((0, 1, 2, 3, 4, 5))
In [20]:
# graph G: edges
G.edges
Out[20]:
EdgeView([(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5)])
In [21]:
# graph G: edges with weights
G.edges.data()
Out[21]:
EdgeDataView([(0, 1, {'weight': 1.5}), (0, 2, {'weight': 6.0}), (0, 3, {'weight': 1.0}), (0, 4, {'weight': 0.5}), (0, 5, {'weight': 3.0}), (1, 2, {'weight': 6.0}), (1, 3, {'weight': 3.0}), (1, 4, {'weight': 2.0}), (1, 5, {'weight': 2.0}), (2, 3, {'weight': 0.5}), (2, 4, {'weight': 4.0}), (2, 5, {'weight': 4.0}), (3, 4, {'weight': 2.0}), (3, 5, {'weight': 4.0}), (4, 5, {'weight': 6.0})])
In [22]:
# Plot of the give graph G
draw_graph(G, colors, pos)
In [23]:
# Adjacency matrix of weighted graph
w = np.zeros([n, n])
for i in range(n):
    for j in range(n):
        temp = G.get_edge_data(i, j, default=0)
        if temp != 0:
            w[i, j] = temp['weight']

w
Out[23]:
array([[0. , 1.5, 6. , 1. , 0.5, 3. ],
       [1.5, 0. , 6. , 3. , 2. , 2. ],
       [6. , 6. , 0. , 0.5, 4. , 4. ],
       [1. , 3. , 0.5, 0. , 2. , 4. ],
       [0.5, 2. , 4. , 2. , 0. , 6. ],
       [3. , 2. , 4. , 4. , 6. , 0. ]])

Brute Force Algorithms to Calculate Optimal Solution¶

In [24]:
best_cost_brute = 0
for b in range(2**n):
    x = [int(t) for t in reversed(list(bin(b)[2:].zfill(n)))]
    cost = 0
    for i in range(n):
        for j in range(n):
            cost = cost + w[i, j] * x[i] * (1-x[j])
    if best_cost_brute < cost:
        best_cost_brute = cost
        xbest_brute = x
    print('case =', '%-20s' % str(x), ' cost =', '%-6s' %
          str(cost), ' try =', str(b+1))

colors_brute = ['g' if xbest_brute[i] == 0 else 'c' for i in range(n)]
print('\nBest case(solution) =', '%-20s' %
      str(xbest_brute), ' cost =', '%-6s' % str(best_cost_brute))
case = [0, 0, 0, 0, 0, 0]    cost = 0.0     try = 1
case = [1, 0, 0, 0, 0, 0]    cost = 12.0    try = 2
case = [0, 1, 0, 0, 0, 0]    cost = 14.5    try = 3
case = [1, 1, 0, 0, 0, 0]    cost = 23.5    try = 4
case = [0, 0, 1, 0, 0, 0]    cost = 20.5    try = 5
case = [1, 0, 1, 0, 0, 0]    cost = 20.5    try = 6
case = [0, 1, 1, 0, 0, 0]    cost = 23.0    try = 7
case = [1, 1, 1, 0, 0, 0]    cost = 20.0    try = 8
case = [0, 0, 0, 1, 0, 0]    cost = 10.5    try = 9
case = [1, 0, 0, 1, 0, 0]    cost = 20.5    try = 10
case = [0, 1, 0, 1, 0, 0]    cost = 19.0    try = 11
case = [1, 1, 0, 1, 0, 0]    cost = 26.0    try = 12
case = [0, 0, 1, 1, 0, 0]    cost = 30.0    try = 13
case = [1, 0, 1, 1, 0, 0]    cost = 28.0    try = 14
case = [0, 1, 1, 1, 0, 0]    cost = 26.5    try = 15
case = [1, 1, 1, 1, 0, 0]    cost = 21.5    try = 16
case = [0, 0, 0, 0, 1, 0]    cost = 14.5    try = 17
case = [1, 0, 0, 0, 1, 0]    cost = 25.5    try = 18
case = [0, 1, 0, 0, 1, 0]    cost = 25.0    try = 19
case = [1, 1, 0, 0, 1, 0]    cost = 33.0    try = 20
case = [0, 0, 1, 0, 1, 0]    cost = 27.0    try = 21
case = [1, 0, 1, 0, 1, 0]    cost = 26.0    try = 22
case = [0, 1, 1, 0, 1, 0]    cost = 25.5    try = 23
case = [1, 1, 1, 0, 1, 0]    cost = 21.5    try = 24
case = [0, 0, 0, 1, 1, 0]    cost = 21.0    try = 25
case = [1, 0, 0, 1, 1, 0]    cost = 30.0    try = 26
case = [0, 1, 0, 1, 1, 0]    cost = 25.5    try = 27
case = [1, 1, 0, 1, 1, 0]    cost = 31.5    try = 28
case = [0, 0, 1, 1, 1, 0]    cost = 32.5    try = 29
case = [1, 0, 1, 1, 1, 0]    cost = 29.5    try = 30
case = [0, 1, 1, 1, 1, 0]    cost = 25.0    try = 31
case = [1, 1, 1, 1, 1, 0]    cost = 19.0    try = 32
case = [0, 0, 0, 0, 0, 1]    cost = 19.0    try = 33
case = [1, 0, 0, 0, 0, 1]    cost = 25.0    try = 34
case = [0, 1, 0, 0, 0, 1]    cost = 29.5    try = 35
case = [1, 1, 0, 0, 0, 1]    cost = 32.5    try = 36
case = [0, 0, 1, 0, 0, 1]    cost = 31.5    try = 37
case = [1, 0, 1, 0, 0, 1]    cost = 25.5    try = 38
case = [0, 1, 1, 0, 0, 1]    cost = 30.0    try = 39
case = [1, 1, 1, 0, 0, 1]    cost = 21.0    try = 40
case = [0, 0, 0, 1, 0, 1]    cost = 21.5    try = 41
case = [1, 0, 0, 1, 0, 1]    cost = 25.5    try = 42
case = [0, 1, 0, 1, 0, 1]    cost = 26.0    try = 43
case = [1, 1, 0, 1, 0, 1]    cost = 27.0    try = 44
case = [0, 0, 1, 1, 0, 1]    cost = 33.0    try = 45
case = [1, 0, 1, 1, 0, 1]    cost = 25.0    try = 46
case = [0, 1, 1, 1, 0, 1]    cost = 25.5    try = 47
case = [1, 1, 1, 1, 0, 1]    cost = 14.5    try = 48
case = [0, 0, 0, 0, 1, 1]    cost = 21.5    try = 49
case = [1, 0, 0, 0, 1, 1]    cost = 26.5    try = 50
case = [0, 1, 0, 0, 1, 1]    cost = 28.0    try = 51
case = [1, 1, 0, 0, 1, 1]    cost = 30.0    try = 52
case = [0, 0, 1, 0, 1, 1]    cost = 26.0    try = 53
case = [1, 0, 1, 0, 1, 1]    cost = 19.0    try = 54
case = [0, 1, 1, 0, 1, 1]    cost = 20.5    try = 55
case = [1, 1, 1, 0, 1, 1]    cost = 10.5    try = 56
case = [0, 0, 0, 1, 1, 1]    cost = 20.0    try = 57
case = [1, 0, 0, 1, 1, 1]    cost = 23.0    try = 58
case = [0, 1, 0, 1, 1, 1]    cost = 20.5    try = 59
case = [1, 1, 0, 1, 1, 1]    cost = 20.5    try = 60
case = [0, 0, 1, 1, 1, 1]    cost = 23.5    try = 61
case = [1, 0, 1, 1, 1, 1]    cost = 14.5    try = 62
case = [0, 1, 1, 1, 1, 1]    cost = 12.0    try = 63
case = [1, 1, 1, 1, 1, 1]    cost = 0.0     try = 64

Best case(solution) = [1, 1, 0, 0, 1, 0]    cost = 33.0  
In [25]:
draw_graph(G, colors_brute, pos)

QAOA solving Weighted Maxcut¶

In [26]:
from qiskit import QuantumCircuit, Aer
from qiskit.circuit import Parameter


def maxcut_obj(solution, graph):
    obj = 0
    for i, j in graph.edges():
        if solution[i] != solution[j]:
            obj -= 1 * w[i][j]
    return obj  # cost function(hamiltonian)


def compute_expectation(counts, graph):
    avg = 0
    sum_count = 0
    for bit_string, count in counts.items():
        obj = maxcut_obj(bit_string, graph)
        avg += obj * count
        sum_count += count  # sum_count is shot
    return avg/sum_count  # minimize this function


def create_qaoa_circ(graph, theta):
    nqubits = len(graph.nodes())
    n_layers = len(theta)//2
    beta = theta[:n_layers]
    gamma = theta[n_layers:]

    qc = QuantumCircuit(nqubits)

    qc.h(range(nqubits))

    for layer_index in range(n_layers):
        for pair in list(graph.edges()):
            qc.rzz(2 * gamma[layer_index] * w[pair[0]]
                   [pair[1]], pair[0], pair[1])
        for qubit in range(nqubits):
            qc.rx(2 * beta[layer_index], qubit)

    qc.measure_all()
    return qc


def get_expectation(graph, shots=512):
    backend = Aer.get_backend('qasm_simulator')
    backend.shots = shots

    def execute_circ(theta):
        qc = create_qaoa_circ(graph, theta)
        counts = backend.run(qc, seed_simulator=10,
                             nshots=512).result().get_counts()
        return compute_expectation(counts, graph)

    return execute_circ
In [27]:
from scipy.optimize import minimize
expectation = get_expectation(G)
p = 1  # 32 64
res = minimize(expectation, np.ones(p*2)*np.pi/2,
               method='COBYLA', options={'maxiter': 2500})
In [28]:
res
Out[28]:
     fun: -24.17529296875
   maxcv: 0.0
 message: 'Optimization terminated successfully.'
    nfev: 31
  status: 1
 success: True
       x: array([2.57910398, 1.5991255 ])
In [29]:
from qiskit.visualization import plot_histogram
backend = Aer.get_backend('aer_simulator')
backend.shots = 2**14

qc_res = create_qaoa_circ(G, res.x)
counts = backend.run(qc_res, seed_simulator=10).result().get_counts()
plot_histogram(counts, figsize=(20, 5), title='512 Shots')
Out[29]:
In [30]:
# Plot the Quantum Circuit of QAOA
# qc_res.draw(output='mpl', plot_barriers=True).savefig('QAOA_circuit.png', dpi=720)
qc_res.draw(output='mpl', plot_barriers=True)
Out[30]:
In [31]:
# qc_res.draw(output='mpl', plot_barriers=True, fold=-1).savefig('QAOA_circuit(full).png', dpi=720)
qc_res.draw(output='mpl', plot_barriers=True, fold=-1)
Out[31]:
In [32]:
str(counts)
Out[32]:
"{'001001': 54, '100001': 23, '001010': 15, '111111': 3, '101001': 79, '111010': 82, '001000': 16, '101000': 30, '011011': 22, '011110': 27, '000001': 2, '110110': 67, '010111': 23, '101100': 1, '000011': 6, '011111': 8, '110010': 16, '101110': 2, '000101': 67, '010001': 5, '001110': 4, '111011': 21, '100101': 65, '010110': 75, '110011': 3, '000100': 16, '101101': 8, '000111': 8, '100100': 25, '111100': 5, '001101': 34, '010010': 9, '101111': 5, '000110': 4, '110001': 13, '011010': 62, '100000': 8, '000000': 6, '110111': 12, '001100': 2, '100011': 12, '100111': 13, '011000': 9, '111000': 6, '000010': 3, '110101': 16, '101011': 2, '111001': 10, '010000': 5, '001011': 1, '111110': 3, '010011': 4, '010100': 1, '011100': 5, '010101': 1}"
In [33]:
# Sort the counted shot results
{k: v for k, v in sorted(counts.items(), key=lambda item: item[1])}
Out[33]:
{'101100': 1,
 '001011': 1,
 '010100': 1,
 '010101': 1,
 '000001': 2,
 '101110': 2,
 '001100': 2,
 '101011': 2,
 '111111': 3,
 '110011': 3,
 '000010': 3,
 '111110': 3,
 '001110': 4,
 '000110': 4,
 '010011': 4,
 '010001': 5,
 '111100': 5,
 '101111': 5,
 '010000': 5,
 '011100': 5,
 '000011': 6,
 '000000': 6,
 '111000': 6,
 '011111': 8,
 '101101': 8,
 '000111': 8,
 '100000': 8,
 '010010': 9,
 '011000': 9,
 '111001': 10,
 '110111': 12,
 '100011': 12,
 '110001': 13,
 '100111': 13,
 '001010': 15,
 '001000': 16,
 '110010': 16,
 '000100': 16,
 '110101': 16,
 '111011': 21,
 '011011': 22,
 '100001': 23,
 '010111': 23,
 '100100': 25,
 '011110': 27,
 '101000': 30,
 '001101': 34,
 '001001': 54,
 '011010': 62,
 '100101': 65,
 '110110': 67,
 '000101': 67,
 '010110': 75,
 '101001': 79,
 '111010': 82}
In [34]:
result_col = list(map(int, list(max(counts, key=counts.get))))
result_colors = ['g' if result_col[i] == 0 else 'c' for i in range(n)]
In [35]:
# Result of Brute Force algorithm
draw_graph(G, colors_brute, pos)
In [36]:
# Result of QAOA
draw_graph(G, result_colors, pos)
In [37]:
print('xbest_brute :', xbest_brute)
print('QAOA        :', result_col)
xbest_brute : [1, 1, 0, 0, 1, 0]
QAOA        : [1, 1, 1, 0, 1, 0]
In [ ]:
 

Simulating with Different P-Values¶

  • p is just an iteration number, where we perform repetative works and trying to find the best solution out of iteratively executed results.
In [38]:
from tqdm import tqdm

p = 16
res = []
for i in tqdm(range(1, p+1)):
    res.append(minimize(expectation, np.ones(i*2)*np.pi/2,
               method='COBYLA', options={'maxiter': 2500}))
100%|██████████| 16/16 [00:20<00:00,  1.30s/it]
In [39]:
res[0:2]
Out[39]:
[     fun: -24.17529296875
    maxcv: 0.0
  message: 'Optimization terminated successfully.'
     nfev: 31
   status: 1
  success: True
        x: array([2.57910398, 1.5991255 ]),
      fun: -24.716796875
    maxcv: 0.0
  message: 'Optimization terminated successfully.'
     nfev: 54
   status: 1
  success: True
        x: array([2.52461869, 1.63875349, 1.61624568, 1.52923708])]
In [40]:
approx = []
for i in range(p):
    approx.append(-res[i].fun/best_cost_brute)

x = np.arange(1, p+1, 1)

plt.figure(figsize=(8, 6))
plt.plot(x, approx, marker='o', markersize=6, c='k', linestyle='--')
plt.ylim((0.5, 1))
plt.xlim(0, p)
plt.xlabel('iteration')
plt.ylabel('Approximation')
plt.grid(True)
plt.show()
In [41]:
best_p = np.argmax(approx)
print("The best p(iterationo number) is", best_p)
The best p(iterationo number) is 11
  • When p=25, cost hamiltonian is optimized. So we use the best optimized values to output our results.
In [42]:
res[best_p].x
Out[42]:
array([2.69931677, 1.70145888, 1.83501131, 1.52640593, 1.70235575,
       1.52533461, 1.53010659, 1.94876412, 1.40228443, 1.54544357,
       1.47815615, 1.48456692, 1.58893494, 1.57102304, 1.58103879,
       1.68907833, 1.40984573, 1.6220806 , 1.63468791, 1.62148146,
       1.52878007, 1.70463034, 1.58593247, 1.60855217])
In [43]:
from qiskit.visualization import plot_histogram
backend = Aer.get_backend('aer_simulator')
backend.shots = 512

qc_res = create_qaoa_circ(G, res[best_p].x)
counts = backend.run(qc_res, seed_simulator=10).result().get_counts()
plot_histogram(counts, figsize=(20, 5), title='512 Shots')
Out[43]:
In [44]:
result_col = list(map(int, list(max(counts, key=counts.get))))
result_colors = ['g' if result_col[i] == 0 else 'c' for i in range(n)]
In [45]:
print('xbest_brute :', xbest_brute)
print('QAOA        :', result_col)
xbest_brute : [1, 1, 0, 0, 1, 0]
QAOA        : [1, 1, 0, 0, 1, 0]
In [46]:
draw_graph(G, colors_brute, pos)
In [47]:
draw_graph(G, result_colors, pos)
In [48]:
print('Best solution - Brute Force : ' +
      str(xbest_brute) + ',  cost = ' + str(best_cost_brute))
print('Best solution - QAOA        : ' + str(result_col) +
      ',  cost = ' + str(-res[best_p].fun))
Best solution - Brute Force : [1, 1, 0, 0, 1, 0],  cost = 33.0
Best solution - QAOA        : [1, 1, 0, 0, 1, 0],  cost = 28.9130859375
In [ ]:
 

Clustering using QAOA-Weighted Maxcut with Iris Data¶

In [49]:
data_iris
Out[49]:
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) species Euclid_dist Euclid_dist_sq
0 5.1 3.5 1.4 0.2 setosa 6.345077 40.26
1 4.9 3.0 1.4 0.2 setosa 5.916925 35.01
2 4.7 3.2 1.3 0.2 setosa 5.836095 34.06
3 4.6 3.1 1.5 0.2 setosa 5.749783 33.06
4 5.0 3.6 1.4 0.2 setosa 6.321392 39.96
... ... ... ... ... ... ... ...
145 6.7 3.0 5.2 2.3 virginica 9.285473 86.22
146 6.3 2.5 5.0 1.9 virginica 8.634234 74.55
147 6.5 3.0 5.2 2.0 virginica 9.071384 82.29
148 6.2 3.4 5.4 2.3 virginica 9.189668 84.45
149 5.9 3.0 5.1 1.8 virginica 8.547514 73.06

150 rows × 7 columns

  • Selected 3 datapoint of each from the different labels, total of 9 datapoints
In [50]:
num_sample1 = 2
num_sample2 = 6

sample_df1 = data_iris[data_iris['species'] ==
                       'setosa'].sample(num_sample1).sort_index()
# sample_df2 = data_iris[data_iris['species'] =='versicolor'].sample(num_sample).sort_index()
sample_df3 = data_iris[data_iris['species'] ==
                       'virginica'].sample(num_sample2).sort_index()

# data_iris_sample = pd.concat([sample_df1, sample_df2, sample_df3])
data_iris_sample = pd.concat([sample_df1, sample_df3])
data_iris_sample
Out[50]:
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) species Euclid_dist Euclid_dist_sq
15 5.7 4.4 1.5 0.4 setosa 7.366139 54.26
27 5.2 3.5 1.5 0.2 setosa 6.448256 41.58
114 5.8 2.8 5.1 2.4 virginica 8.558621 73.25
117 7.7 3.8 6.7 2.2 virginica 11.111256 123.46
125 7.2 3.2 6.0 1.8 virginica 10.065784 101.32
126 6.2 2.8 4.8 1.8 virginica 8.518216 72.56
140 6.7 3.1 5.6 2.4 virginica 9.571834 91.62
141 6.9 3.1 5.1 2.3 virginica 9.408507 88.52
In [51]:
data_iris_qaoa = data_iris_sample[[
    'sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']]
data_iris_qaoa = np.array(data_iris_qaoa)
data_iris_qaoa_label = iris.target[data_iris_sample.index]
In [52]:
data_iris_qaoa
Out[52]:
array([[5.7, 4.4, 1.5, 0.4],
       [5.2, 3.5, 1.5, 0.2],
       [5.8, 2.8, 5.1, 2.4],
       [7.7, 3.8, 6.7, 2.2],
       [7.2, 3.2, 6. , 1.8],
       [6.2, 2.8, 4.8, 1.8],
       [6.7, 3.1, 5.6, 2.4],
       [6.9, 3.1, 5.1, 2.3]])
In [53]:
data_iris_qaoa_label
Out[53]:
array([0, 0, 2, 2, 2, 2, 2, 2])
In [54]:
len(data_iris_qaoa_label)
Out[54]:
8

Method 1: Brute Force Algorithms to Calculate Optimal Solution¶

In [55]:
# Function to calculate the distance between given two data points

import math


def dist(a, b):
    "Euclidean dist between two lists"
    d = np.linalg.norm(np.array(a) - np.array(b), axis=0)
    return round(d, 4)
In [56]:
import random

# Assign the number of nodes, edge connection, and its weight of the Graph.
n = len(data_iris_qaoa_label)
data = data_iris_qaoa
label = data_iris_qaoa_label

datapoints = data.tolist()
print("Data points:", datapoints)
labels = label
print("Data labels:", labels)

G = nx.Graph()
G.add_nodes_from(np.arange(0, n, 1))
for i in range(n):
    for j in range(n):
        if i > j:
            G.add_edge(i, j, weight=dist(datapoints[i], datapoints[j]))

colors = ['g' for node in G.nodes()]
pos = nx.spring_layout(G)
Data points: [[5.7, 4.4, 1.5, 0.4], [5.2, 3.5, 1.5, 0.2], [5.8, 2.8, 5.1, 2.4], [7.7, 3.8, 6.7, 2.2], [7.2, 3.2, 6.0, 1.8], [6.2, 2.8, 4.8, 1.8], [6.7, 3.1, 5.6, 2.4], [6.9, 3.1, 5.1, 2.3]]
Data labels: [0 0 2 2 2 2 2 2]
In [57]:
draw_graph(G, colors, pos)
In [58]:
# Calculate Adjacency matrix of the given Graph
w = np.zeros([n, n])
for i in range(n):
    for j in range(n):
        temp = G.get_edge_data(i, j, default=0)
        if temp != 0:
            w[i, j] = temp['weight']

w
Out[58]:
array([[0.    , 1.0488, 4.4193, 5.8856, 5.0892, 3.9573, 4.8477, 4.4385],
       [1.0488, 0.    , 4.3186, 6.1139, 5.1865, 3.8652, 4.9051, 4.5188],
       [4.4193, 4.3186, 0.    , 2.6851, 1.8138, 0.781 , 1.0724, 1.1446],
       [5.8856, 6.1139, 2.6851, 0.    , 1.1225, 2.6495, 1.6553, 1.9235],
       [5.0892, 5.1865, 1.8138, 1.1225, 0.    , 1.6125, 0.8832, 1.077 ],
       [3.9573, 3.8652, 0.781 , 2.6495, 1.6125, 0.    , 1.1576, 0.9592],
       [4.8477, 4.9051, 1.0724, 1.6553, 0.8832, 1.1576, 0.    , 0.5477],
       [4.4385, 4.5188, 1.1446, 1.9235, 1.077 , 0.9592, 0.5477, 0.    ]])
  • Brute Force Algorithm
In [59]:
best_cost_brute = 0
for b in range(2**n):
    x = [int(t) for t in reversed(list(bin(b)[2:].zfill(n)))]
    cost = 0
    for i in range(n):
        for j in range(n):
            cost = cost + w[i, j] * x[i] * (1-x[j])
    if best_cost_brute < cost:
        best_cost_brute = cost
        xbest_brute = x
    print('case =', '%-30s' % str(x), ' cost =', '%-24s' %
          str(cost), 'try =', str(b+1))

colors_brute = ['g' if xbest_brute[i] == 0 else 'c' for i in range(n)]
print('\nBest case(solution) =', '%-30s' %
      str(xbest_brute), ' cost =', '%-24s' % str(best_cost_brute))
case = [0, 0, 0, 0, 0, 0, 0, 0]        cost = 0.0                      try = 1
case = [1, 0, 0, 0, 0, 0, 0, 0]        cost = 29.686400000000003       try = 2
case = [0, 1, 0, 0, 0, 0, 0, 0]        cost = 29.9569                  try = 3
case = [1, 1, 0, 0, 0, 0, 0, 0]        cost = 57.5457                  try = 4
case = [0, 0, 1, 0, 0, 0, 0, 0]        cost = 16.2348                  try = 5
case = [1, 0, 1, 0, 0, 0, 0, 0]        cost = 37.0826                  try = 6
case = [0, 1, 1, 0, 0, 0, 0, 0]        cost = 37.5545                  try = 7
case = [1, 1, 1, 0, 0, 0, 0, 0]        cost = 56.3047                  try = 8
case = [0, 0, 0, 1, 0, 0, 0, 0]        cost = 22.035400000000003       try = 9
case = [1, 0, 0, 1, 0, 0, 0, 0]        cost = 39.9506                  try = 10
case = [0, 1, 0, 1, 0, 0, 0, 0]        cost = 39.7645                  try = 11
case = [1, 1, 0, 1, 0, 0, 0, 0]        cost = 55.5821                  try = 12
case = [0, 0, 1, 1, 0, 0, 0, 0]        cost = 32.9                     try = 13
case = [1, 0, 1, 1, 0, 0, 0, 0]        cost = 41.9766                  try = 14
case = [0, 1, 1, 1, 0, 0, 0, 0]        cost = 41.9919                  try = 15
case = [1, 1, 1, 1, 0, 0, 0, 0]        cost = 48.97089999999999        try = 16
case = [0, 0, 0, 0, 1, 0, 0, 0]        cost = 16.7847                  try = 17
case = [1, 0, 0, 0, 1, 0, 0, 0]        cost = 36.292699999999996       try = 18
case = [0, 1, 0, 0, 1, 0, 0, 0]        cost = 36.3686                  try = 19
case = [1, 1, 0, 0, 1, 0, 0, 0]        cost = 53.778999999999996       try = 20
case = [0, 0, 1, 0, 1, 0, 0, 0]        cost = 29.3919                  try = 21
case = [1, 0, 1, 0, 1, 0, 0, 0]        cost = 40.061299999999996       try = 22
case = [0, 1, 1, 0, 1, 0, 0, 0]        cost = 40.3386                  try = 23
case = [1, 1, 1, 0, 1, 0, 0, 0]        cost = 48.910399999999996       try = 24
case = [0, 0, 0, 1, 1, 0, 0, 0]        cost = 36.5751                  try = 25
case = [1, 0, 0, 1, 1, 0, 0, 0]        cost = 44.3119                  try = 26
case = [0, 1, 0, 1, 1, 0, 0, 0]        cost = 43.93119999999999        try = 27
case = [1, 1, 0, 1, 1, 0, 0, 0]        cost = 49.5704                  try = 28
case = [0, 0, 1, 1, 1, 0, 0, 0]        cost = 43.8121                  try = 29
case = [1, 0, 1, 1, 1, 0, 0, 0]        cost = 42.7103                  try = 30
case = [0, 1, 1, 1, 1, 0, 0, 0]        cost = 42.53099999999999        try = 31
case = [1, 1, 1, 1, 1, 0, 0, 0]        cost = 39.331599999999995       try = 32
case = [0, 0, 0, 0, 0, 1, 0, 0]        cost = 14.982300000000002       try = 33
case = [1, 0, 0, 0, 0, 1, 0, 0]        cost = 36.75410000000001        try = 34
case = [0, 1, 0, 0, 0, 1, 0, 0]        cost = 37.208800000000004       try = 35
case = [1, 1, 0, 0, 0, 1, 0, 0]        cost = 56.883                   try = 36
case = [0, 0, 1, 0, 0, 1, 0, 0]        cost = 29.6551                  try = 37
case = [1, 0, 1, 0, 0, 1, 0, 0]        cost = 42.588300000000004       try = 38
case = [0, 1, 1, 0, 0, 1, 0, 0]        cost = 43.244400000000006       try = 39
case = [1, 1, 1, 0, 0, 1, 0, 0]        cost = 54.08                    try = 40
case = [0, 0, 0, 1, 0, 1, 0, 0]        cost = 31.718700000000002       try = 41
case = [1, 0, 0, 1, 0, 1, 0, 0]        cost = 41.7193                  try = 42
case = [0, 1, 0, 1, 0, 1, 0, 0]        cost = 41.7174                  try = 43
case = [1, 1, 0, 1, 0, 1, 0, 0]        cost = 49.62039999999999        try = 44
case = [0, 0, 1, 1, 0, 1, 0, 0]        cost = 41.021300000000004       try = 45
case = [1, 0, 1, 1, 0, 1, 0, 0]        cost = 42.183299999999996       try = 46
case = [0, 1, 1, 1, 0, 1, 0, 0]        cost = 42.382799999999996       try = 47
case = [1, 1, 1, 1, 0, 1, 0, 0]        cost = 41.447199999999995       try = 48
case = [0, 0, 0, 0, 1, 1, 0, 0]        cost = 28.541999999999998       try = 49
case = [1, 0, 0, 0, 1, 1, 0, 0]        cost = 40.135400000000004       try = 50
case = [0, 1, 0, 0, 1, 1, 0, 0]        cost = 40.3955                  try = 51
case = [1, 1, 0, 0, 1, 1, 0, 0]        cost = 49.89130000000001        try = 52
case = [0, 0, 1, 0, 1, 1, 0, 0]        cost = 39.5872                  try = 53
case = [1, 0, 1, 0, 1, 1, 0, 0]        cost = 42.342000000000006       try = 54
case = [0, 1, 1, 0, 1, 1, 0, 0]        cost = 42.80350000000001        try = 55
case = [1, 1, 1, 0, 1, 1, 0, 0]        cost = 43.46070000000001        try = 56
case = [0, 0, 0, 1, 1, 1, 0, 0]        cost = 43.0334                  try = 57
case = [1, 0, 0, 1, 1, 1, 0, 0]        cost = 42.85560000000001        try = 58
case = [0, 1, 0, 1, 1, 1, 0, 0]        cost = 42.65910000000001        try = 59
case = [1, 1, 0, 1, 1, 1, 0, 0]        cost = 40.383700000000005       try = 60
case = [0, 0, 1, 1, 1, 1, 0, 0]        cost = 48.708400000000005       try = 61
case = [1, 0, 1, 1, 1, 1, 0, 0]        cost = 39.69200000000001        try = 62
case = [0, 1, 1, 1, 1, 1, 0, 0]        cost = 39.69690000000001        try = 63
case = [1, 1, 1, 1, 1, 1, 0, 0]        cost = 28.582899999999995       try = 64
case = [0, 0, 0, 0, 0, 0, 1, 0]        cost = 15.069000000000003       try = 65
case = [1, 0, 0, 0, 0, 0, 1, 0]        cost = 35.06                    try = 66
case = [0, 1, 0, 0, 0, 0, 1, 0]        cost = 35.2157                  try = 67
case = [1, 1, 0, 0, 0, 0, 1, 0]        cost = 53.109100000000005       try = 68
case = [0, 0, 1, 0, 0, 0, 1, 0]        cost = 29.159                   try = 69
case = [1, 0, 1, 0, 0, 0, 1, 0]        cost = 40.3114                  try = 70
case = [0, 1, 1, 0, 0, 0, 1, 0]        cost = 40.6685                  try = 71
case = [1, 1, 1, 0, 0, 0, 1, 0]        cost = 49.723299999999995       try = 72
case = [0, 0, 0, 1, 0, 0, 1, 0]        cost = 33.793800000000005       try = 73
case = [1, 0, 0, 1, 0, 0, 1, 0]        cost = 42.0136                  try = 74
case = [0, 1, 0, 1, 0, 0, 1, 0]        cost = 41.712700000000005       try = 75
case = [1, 1, 0, 1, 0, 0, 1, 0]        cost = 47.834900000000005       try = 76
case = [0, 0, 1, 1, 0, 0, 1, 0]        cost = 42.513600000000004       try = 77
case = [1, 0, 1, 1, 0, 0, 1, 0]        cost = 41.8948                  try = 78
case = [0, 1, 1, 1, 0, 0, 1, 0]        cost = 41.795300000000005       try = 79
case = [1, 1, 1, 1, 0, 0, 1, 0]        cost = 39.0789                  try = 80
case = [0, 0, 0, 0, 1, 0, 1, 0]        cost = 30.0873                  try = 81
case = [1, 0, 0, 0, 1, 0, 1, 0]        cost = 39.899899999999995       try = 82
case = [0, 1, 0, 0, 1, 0, 1, 0]        cost = 39.861                   try = 83
case = [1, 1, 0, 0, 1, 0, 1, 0]        cost = 47.576                   try = 84
case = [0, 0, 1, 0, 1, 0, 1, 0]        cost = 40.5497                  try = 85
case = [1, 0, 1, 0, 1, 0, 1, 0]        cost = 41.52369999999999        try = 86
case = [0, 1, 1, 0, 1, 0, 1, 0]        cost = 41.68619999999999        try = 87
case = [1, 1, 1, 0, 1, 0, 1, 0]        cost = 40.562599999999996       try = 88
case = [0, 0, 0, 1, 1, 0, 1, 0]        cost = 46.567099999999996       try = 89
case = [1, 0, 0, 1, 1, 0, 1, 0]        cost = 44.6085                  try = 90
case = [0, 1, 0, 1, 1, 0, 1, 0]        cost = 44.11299999999999        try = 91
case = [1, 1, 0, 1, 1, 0, 1, 0]        cost = 40.056799999999996       try = 92
case = [0, 0, 1, 1, 1, 0, 1, 0]        cost = 51.6593                  try = 93
case = [1, 0, 1, 1, 1, 0, 1, 0]        cost = 40.8621                  try = 94
case = [0, 1, 1, 1, 1, 0, 1, 0]        cost = 40.56799999999999        try = 95
case = [1, 1, 1, 1, 1, 0, 1, 0]        cost = 27.673199999999998       try = 96
case = [0, 0, 0, 0, 0, 1, 1, 0]        cost = 27.7361                  try = 97
case = [1, 0, 0, 0, 0, 1, 1, 0]        cost = 39.8125                  try = 98
case = [0, 1, 0, 0, 0, 1, 1, 0]        cost = 40.15239999999999        try = 99
case = [1, 1, 0, 0, 0, 1, 1, 0]        cost = 50.1312                  try = 100
case = [0, 0, 1, 0, 0, 1, 1, 0]        cost = 40.2641                  try = 101
case = [1, 0, 1, 0, 0, 1, 1, 0]        cost = 43.5019                  try = 102
case = [0, 1, 1, 0, 0, 1, 1, 0]        cost = 44.04319999999999        try = 103
case = [1, 1, 1, 0, 0, 1, 1, 0]        cost = 45.1834                  try = 104
case = [0, 0, 0, 1, 0, 1, 1, 0]        cost = 41.1619                  try = 105
case = [1, 0, 0, 1, 0, 1, 1, 0]        cost = 41.4671                  try = 106
case = [0, 1, 0, 1, 0, 1, 1, 0]        cost = 41.3504                  try = 107
case = [1, 1, 0, 1, 0, 1, 1, 0]        cost = 39.55799999999999        try = 108
case = [0, 0, 1, 1, 0, 1, 1, 0]        cost = 48.3197                  try = 109
case = [1, 0, 1, 1, 0, 1, 1, 0]        cost = 39.786300000000004       try = 110
case = [0, 1, 1, 1, 0, 1, 1, 0]        cost = 39.871                   try = 111
case = [1, 1, 1, 1, 0, 1, 1, 0]        cost = 29.239999999999995       try = 112
case = [0, 0, 0, 0, 1, 1, 1, 0]        cost = 39.529399999999995       try = 113
case = [1, 0, 0, 0, 1, 1, 1, 0]        cost = 41.4274                  try = 114
case = [0, 1, 0, 0, 1, 1, 1, 0]        cost = 41.572700000000005       try = 115
case = [1, 1, 0, 0, 1, 1, 1, 0]        cost = 41.3731                  try = 116
case = [0, 0, 1, 0, 1, 1, 1, 0]        cost = 48.4298                  try = 117
case = [1, 0, 1, 0, 1, 1, 1, 0]        cost = 41.4892                  try = 118
case = [0, 1, 1, 0, 1, 1, 1, 0]        cost = 41.8359                  try = 119
case = [1, 1, 1, 0, 1, 1, 1, 0]        cost = 32.79769999999999        try = 120
case = [0, 0, 0, 1, 1, 1, 1, 0]        cost = 50.7102                  try = 121
case = [1, 0, 0, 1, 1, 1, 1, 0]        cost = 40.837                   try = 122
case = [0, 1, 0, 1, 1, 1, 1, 0]        cost = 40.52570000000001        try = 123
case = [1, 1, 0, 1, 1, 1, 1, 0]        cost = 28.554899999999996       try = 124
case = [0, 0, 1, 1, 1, 1, 1, 0]        cost = 54.240399999999994       try = 125
case = [1, 0, 1, 1, 1, 1, 1, 0]        cost = 35.528600000000004       try = 126
case = [0, 1, 1, 1, 1, 1, 1, 0]        cost = 35.418699999999994       try = 127
case = [1, 1, 1, 1, 1, 1, 1, 0]        cost = 14.609300000000003       try = 128
case = [0, 0, 0, 0, 0, 0, 0, 1]        cost = 14.609300000000003       try = 129
case = [1, 0, 0, 0, 0, 0, 0, 1]        cost = 35.4187                  try = 130
case = [0, 1, 0, 0, 0, 0, 0, 1]        cost = 35.528600000000004       try = 131
case = [1, 1, 0, 0, 0, 0, 0, 1]        cost = 54.240399999999994       try = 132
case = [0, 0, 1, 0, 0, 0, 0, 1]        cost = 28.5549                  try = 133
case = [1, 0, 1, 0, 0, 0, 0, 1]        cost = 40.52569999999999        try = 134
case = [0, 1, 1, 0, 0, 0, 0, 1]        cost = 40.836999999999996       try = 135
case = [1, 1, 1, 0, 0, 0, 0, 1]        cost = 50.71019999999999        try = 136
case = [0, 0, 0, 1, 0, 0, 0, 1]        cost = 32.797700000000006       try = 137
case = [1, 0, 0, 1, 0, 0, 0, 1]        cost = 41.83589999999999        try = 138
case = [0, 1, 0, 1, 0, 0, 0, 1]        cost = 41.48919999999999        try = 139
case = [1, 1, 0, 1, 0, 0, 0, 1]        cost = 48.42979999999999        try = 140
case = [0, 0, 1, 1, 0, 0, 0, 1]        cost = 41.3731                  try = 141
case = [1, 0, 1, 1, 0, 0, 0, 1]        cost = 41.5727                  try = 142
case = [0, 1, 1, 1, 0, 0, 0, 1]        cost = 41.4274                  try = 143
case = [1, 1, 1, 1, 0, 0, 0, 1]        cost = 39.5294                  try = 144
case = [0, 0, 0, 0, 1, 0, 0, 1]        cost = 29.240000000000002       try = 145
case = [1, 0, 0, 0, 1, 0, 0, 1]        cost = 39.870999999999995       try = 146
case = [0, 1, 0, 0, 1, 0, 0, 1]        cost = 39.7863                  try = 147
case = [1, 1, 0, 0, 1, 0, 0, 1]        cost = 48.3197                  try = 148
case = [0, 0, 1, 0, 1, 0, 0, 1]        cost = 39.558                   try = 149
case = [1, 0, 1, 0, 1, 0, 0, 1]        cost = 41.35039999999999        try = 150
case = [0, 1, 1, 0, 1, 0, 0, 1]        cost = 41.467099999999995       try = 151
case = [1, 1, 1, 0, 1, 0, 0, 1]        cost = 41.1619                  try = 152
case = [0, 0, 0, 1, 1, 0, 0, 1]        cost = 45.1834                  try = 153
case = [1, 0, 0, 1, 1, 0, 0, 1]        cost = 44.0432                  try = 154
case = [0, 1, 0, 1, 1, 0, 0, 1]        cost = 43.50189999999999        try = 155
case = [1, 1, 0, 1, 1, 0, 0, 1]        cost = 40.2641                  try = 156
case = [0, 0, 1, 1, 1, 0, 0, 1]        cost = 50.1312                  try = 157
case = [1, 0, 1, 1, 1, 0, 0, 1]        cost = 40.1524                  try = 158
case = [0, 1, 1, 1, 1, 0, 0, 1]        cost = 39.8125                  try = 159
case = [1, 1, 1, 1, 1, 0, 0, 1]        cost = 27.736099999999997       try = 160
case = [0, 0, 0, 0, 0, 1, 0, 1]        cost = 27.6732                  try = 161
case = [1, 0, 0, 0, 0, 1, 0, 1]        cost = 40.56799999999999        try = 162
case = [0, 1, 0, 0, 0, 1, 0, 1]        cost = 40.86209999999999        try = 163
case = [1, 1, 0, 0, 0, 1, 0, 1]        cost = 51.65929999999999        try = 164
case = [0, 0, 1, 0, 0, 1, 0, 1]        cost = 40.056799999999996       try = 165
case = [1, 0, 1, 0, 0, 1, 0, 1]        cost = 44.11299999999999        try = 166
case = [0, 1, 1, 0, 0, 1, 0, 1]        cost = 44.60849999999999        try = 167
case = [1, 1, 1, 0, 0, 1, 0, 1]        cost = 46.567099999999996       try = 168
case = [0, 0, 0, 1, 0, 1, 0, 1]        cost = 40.562599999999996       try = 169
case = [1, 0, 0, 1, 0, 1, 0, 1]        cost = 41.68619999999999        try = 170
case = [0, 1, 0, 1, 0, 1, 0, 1]        cost = 41.52369999999999        try = 171
case = [1, 1, 0, 1, 0, 1, 0, 1]        cost = 40.54969999999999        try = 172
case = [0, 0, 1, 1, 0, 1, 0, 1]        cost = 47.57599999999999        try = 173
case = [1, 0, 1, 1, 0, 1, 0, 1]        cost = 39.861                   try = 174
case = [0, 1, 1, 1, 0, 1, 0, 1]        cost = 39.899899999999995       try = 175
case = [1, 1, 1, 1, 0, 1, 0, 1]        cost = 30.0873                  try = 176
case = [0, 0, 0, 0, 1, 1, 0, 1]        cost = 39.0789                  try = 177
case = [1, 0, 0, 0, 1, 1, 0, 1]        cost = 41.79529999999999        try = 178
case = [0, 1, 0, 0, 1, 1, 0, 1]        cost = 41.89479999999999        try = 179
case = [1, 1, 0, 0, 1, 1, 0, 1]        cost = 42.513600000000004       try = 180
case = [0, 0, 1, 0, 1, 1, 0, 1]        cost = 47.8349                  try = 181
case = [1, 0, 1, 0, 1, 1, 0, 1]        cost = 41.71269999999999        try = 182
case = [0, 1, 1, 0, 1, 1, 0, 1]        cost = 42.01359999999999        try = 183
case = [1, 1, 1, 0, 1, 1, 0, 1]        cost = 33.79379999999999        try = 184
case = [0, 0, 0, 1, 1, 1, 0, 1]        cost = 49.72329999999999        try = 185
case = [1, 0, 0, 1, 1, 1, 0, 1]        cost = 40.668499999999995       try = 186
case = [0, 1, 0, 1, 1, 1, 0, 1]        cost = 40.31139999999999        try = 187
case = [1, 1, 0, 1, 1, 1, 0, 1]        cost = 29.158999999999995       try = 188
case = [0, 0, 1, 1, 1, 1, 0, 1]        cost = 53.1091                  try = 189
case = [1, 0, 1, 1, 1, 1, 0, 1]        cost = 35.2157                  try = 190
case = [0, 1, 1, 1, 1, 1, 0, 1]        cost = 35.06                    try = 191
case = [1, 1, 1, 1, 1, 1, 0, 1]        cost = 15.069000000000003       try = 192
case = [0, 0, 0, 0, 0, 0, 1, 1]        cost = 28.5829                  try = 193
case = [1, 0, 0, 0, 0, 0, 1, 1]        cost = 39.6969                  try = 194
case = [0, 1, 0, 0, 0, 0, 1, 1]        cost = 39.69199999999999        try = 195
case = [1, 1, 0, 0, 0, 0, 1, 1]        cost = 48.7084                  try = 196
case = [0, 0, 1, 0, 0, 0, 1, 1]        cost = 40.3837                  try = 197
case = [1, 0, 1, 0, 0, 0, 1, 1]        cost = 42.6591                  try = 198
case = [0, 1, 1, 0, 0, 0, 1, 1]        cost = 42.855599999999995       try = 199
case = [1, 1, 1, 0, 0, 0, 1, 1]        cost = 43.0334                  try = 200
case = [0, 0, 0, 1, 0, 0, 1, 1]        cost = 43.460699999999996       try = 201
case = [1, 0, 0, 1, 0, 0, 1, 1]        cost = 42.8035                  try = 202
case = [0, 1, 0, 1, 0, 0, 1, 1]        cost = 42.34199999999999        try = 203
case = [1, 1, 0, 1, 0, 0, 1, 1]        cost = 39.5872                  try = 204
case = [0, 0, 1, 1, 0, 0, 1, 1]        cost = 49.8913                  try = 205
case = [1, 0, 1, 1, 0, 0, 1, 1]        cost = 40.3955                  try = 206
case = [0, 1, 1, 1, 0, 0, 1, 1]        cost = 40.1354                  try = 207
case = [1, 1, 1, 1, 0, 0, 1, 1]        cost = 28.541999999999998       try = 208
case = [0, 0, 0, 0, 1, 0, 1, 1]        cost = 41.447199999999995       try = 209
case = [1, 0, 0, 0, 1, 0, 1, 1]        cost = 42.382799999999996       try = 210
case = [0, 1, 0, 0, 1, 0, 1, 1]        cost = 42.183299999999996       try = 211
case = [1, 1, 0, 0, 1, 0, 1, 1]        cost = 41.0213                  try = 212
case = [0, 0, 1, 0, 1, 0, 1, 1]        cost = 49.62039999999999        try = 213
case = [1, 0, 1, 0, 1, 0, 1, 1]        cost = 41.7174                  try = 214
case = [0, 1, 1, 0, 1, 0, 1, 1]        cost = 41.7193                  try = 215
case = [1, 1, 1, 0, 1, 0, 1, 1]        cost = 31.7187                  try = 216
case = [0, 0, 0, 1, 1, 0, 1, 1]        cost = 54.080000000000005       try = 217
case = [1, 0, 0, 1, 1, 0, 1, 1]        cost = 43.2444                  try = 218
case = [0, 1, 0, 1, 1, 0, 1, 1]        cost = 42.588300000000004       try = 219
case = [1, 1, 0, 1, 1, 0, 1, 1]        cost = 29.6551                  try = 220
case = [0, 0, 1, 1, 1, 0, 1, 1]        cost = 56.883                   try = 221
case = [1, 0, 1, 1, 1, 0, 1, 1]        cost = 37.208800000000004       try = 222
case = [0, 1, 1, 1, 1, 0, 1, 1]        cost = 36.7541                  try = 223
case = [1, 1, 1, 1, 1, 0, 1, 1]        cost = 14.982300000000002       try = 224
case = [0, 0, 0, 0, 0, 1, 1, 1]        cost = 39.331599999999995       try = 225
case = [1, 0, 0, 0, 0, 1, 1, 1]        cost = 42.53099999999999        try = 226
case = [0, 1, 0, 0, 0, 1, 1, 1]        cost = 42.71029999999999        try = 227
case = [1, 1, 0, 0, 0, 1, 1, 1]        cost = 43.81209999999999        try = 228
case = [0, 0, 1, 0, 0, 1, 1, 1]        cost = 49.57039999999999        try = 229
case = [1, 0, 1, 0, 0, 1, 1, 1]        cost = 43.93119999999999        try = 230
case = [0, 1, 1, 0, 0, 1, 1, 1]        cost = 44.311899999999994       try = 231
case = [1, 1, 1, 0, 0, 1, 1, 1]        cost = 36.57509999999999        try = 232
case = [0, 0, 0, 1, 0, 1, 1, 1]        cost = 48.910399999999996       try = 233
case = [1, 0, 0, 1, 0, 1, 1, 1]        cost = 40.33859999999999        try = 234
case = [0, 1, 0, 1, 0, 1, 1, 1]        cost = 40.06129999999999        try = 235
case = [1, 1, 0, 1, 0, 1, 1, 1]        cost = 29.391899999999993       try = 236
case = [0, 0, 1, 1, 0, 1, 1, 1]        cost = 53.778999999999996       try = 237
case = [1, 0, 1, 1, 0, 1, 1, 1]        cost = 36.3686                  try = 238
case = [0, 1, 1, 1, 0, 1, 1, 1]        cost = 36.292699999999996       try = 239
case = [1, 1, 1, 1, 0, 1, 1, 1]        cost = 16.7847                  try = 240
case = [0, 0, 0, 0, 1, 1, 1, 1]        cost = 48.970899999999986       try = 241
case = [1, 0, 0, 0, 1, 1, 1, 1]        cost = 41.99189999999999        try = 242
case = [0, 1, 0, 0, 1, 1, 1, 1]        cost = 41.97659999999999        try = 243
case = [1, 1, 0, 0, 1, 1, 1, 1]        cost = 32.9                     try = 244
case = [0, 0, 1, 0, 1, 1, 1, 1]        cost = 55.58209999999999        try = 245
case = [1, 0, 1, 0, 1, 1, 1, 1]        cost = 39.7645                  try = 246
case = [0, 1, 1, 0, 1, 1, 1, 1]        cost = 39.950599999999994       try = 247
case = [1, 1, 1, 0, 1, 1, 1, 1]        cost = 22.035400000000003       try = 248
case = [0, 0, 0, 1, 1, 1, 1, 1]        cost = 56.3047                  try = 249
case = [1, 0, 0, 1, 1, 1, 1, 1]        cost = 37.5545                  try = 250
case = [0, 1, 0, 1, 1, 1, 1, 1]        cost = 37.08259999999999        try = 251
case = [1, 1, 0, 1, 1, 1, 1, 1]        cost = 16.2348                  try = 252
case = [0, 0, 1, 1, 1, 1, 1, 1]        cost = 57.5457                  try = 253
case = [1, 0, 1, 1, 1, 1, 1, 1]        cost = 29.9569                  try = 254
case = [0, 1, 1, 1, 1, 1, 1, 1]        cost = 29.686400000000003       try = 255
case = [1, 1, 1, 1, 1, 1, 1, 1]        cost = 0.0                      try = 256

Best case(solution) = [1, 1, 0, 0, 0, 0, 0, 0]        cost = 57.5457                 
In [60]:
draw_graph(G, colors_brute, pos)

Method 2: QAOA¶

In [61]:
from scipy.optimize import minimize
from tqdm import tqdm

expectation = get_expectation(G)
p = 64
res = []
for i in tqdm(range(1, p+1)):
    res.append(minimize(expectation, np.ones(i*2)*np.pi/2,
               method='COBYLA', options={'maxiter': 2500}))
100%|██████████| 64/64 [27:58<00:00, 26.23s/it]
In [62]:
approx = []
for i in range(p):
    approx.append(-res[i].fun/best_cost_brute)

x = np.arange(1, p+1, 1)

plt.plot(x, approx, marker='+', c='k', linestyle='--')
plt.ylim((0.5, 1))
plt.xlim(0, p+1)
plt.xlabel('p')
plt.ylabel('Approximation')
plt.grid(True)
plt.show()
In [63]:
best_p = np.argmax(approx)
print("The best p(iterationo number) is", best_p)
The best p(iterationo number) is 57
In [64]:
res[best_p].x
Out[64]:
array([1.57577829, 1.54388049, 1.58209588, 1.60281273, 2.56052551,
       1.57314439, 2.56466951, 1.52007076, 1.57820612, 1.56277248,
       1.57623348, 2.59839596, 1.54444048, 1.6479297 , 1.58009684,
       1.55222522, 1.57635676, 1.56579821, 1.58043726, 1.76067217,
       1.42911033, 1.60019109, 1.57166335, 1.55694312, 1.57960909,
       1.55750074, 1.5767738 , 1.54088147, 1.52165954, 1.53975529,
       1.58449465, 1.3773996 , 1.65797865, 1.63753202, 1.55340365,
       2.62242997, 1.78190415, 1.35180381, 1.59231624, 1.57478992,
       1.58218233, 1.5666588 , 1.74187213, 1.36842205, 1.57715449,
       1.571707  , 1.62154342, 1.514356  , 1.57692417, 1.54765546,
       1.60389693, 1.51716819, 1.61964165, 1.47330701, 1.67744074,
       1.5502594 , 1.5734684 , 1.56967731, 1.570653  , 1.56515074,
       1.5704142 , 1.57042925, 1.56496001, 1.57696357, 1.56068875,
       1.5785302 , 1.57847397, 1.5606887 , 1.5633145 , 1.55764079,
       1.56082852, 1.5788217 , 1.57166813, 1.57362432, 1.5571596 ,
       1.58828948, 1.59019565, 1.54934253, 1.56742299, 1.57611999,
       1.56862114, 1.55780954, 1.60639211, 1.59525523, 1.56224008,
       1.5815893 , 1.57422963, 1.57101226, 1.53878869, 1.53891379,
       1.56581372, 1.57123284, 1.51864517, 1.62237731, 1.58666675,
       1.55314611, 1.58496987, 1.62318188, 1.51408127, 1.55869817,
       1.549641  , 1.56472834, 1.57194123, 1.57087985, 1.56918835,
       1.59953124, 1.5511578 , 1.57147711, 1.57149666, 1.55757601,
       1.55718731, 1.56991959, 1.57019535, 1.58124579, 1.58160449,
       1.55950224])
In [65]:
from qiskit.visualization import plot_histogram
backend = Aer.get_backend('aer_simulator')
backend.shots = 512

qc_res = create_qaoa_circ(G, res[best_p].x)
counts = backend.run(qc_res, seed_simulator=10).result().get_counts()
plot_histogram(counts, figsize=(40, 5), title='512 Shots')
Out[65]:
In [66]:
# # Plot the Quantum Circuit of QAOA
# qc_res.draw(output='mpl', plot_barriers=True).savefig('QAOA_circuit.png', dpi=720)
# qc_res.draw(output='mpl', plot_barriers=True)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [66], in <cell line: 2>()
      1 # Plot the Quantum Circuit of QAOA
----> 2 qc_res.draw(output='mpl', plot_barriers=True).savefig('QAOA_circuit.png', dpi=720)
      3 qc_res.draw(output='mpl', plot_barriers=True)

File ~\anaconda3\lib\site-packages\matplotlib\figure.py:3019, in Figure.savefig(self, fname, transparent, **kwargs)
   3015     for ax in self.axes:
   3016         stack.enter_context(
   3017             ax.patch._cm_set(facecolor='none', edgecolor='none'))
-> 3019 self.canvas.print_figure(fname, **kwargs)

File ~\anaconda3\lib\site-packages\matplotlib\backend_bases.py:2319, in FigureCanvasBase.print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, pad_inches, bbox_extra_artists, backend, **kwargs)
   2315 try:
   2316     # _get_renderer may change the figure dpi (as vector formats
   2317     # force the figure dpi to 72), so we need to set it again here.
   2318     with cbook._setattr_cm(self.figure, dpi=dpi):
-> 2319         result = print_method(
   2320             filename,
   2321             facecolor=facecolor,
   2322             edgecolor=edgecolor,
   2323             orientation=orientation,
   2324             bbox_inches_restore=_bbox_inches_restore,
   2325             **kwargs)
   2326 finally:
   2327     if bbox_inches and restore_bbox:

File ~\anaconda3\lib\site-packages\matplotlib\backend_bases.py:1648, in _check_savefig_extra_args.<locals>.wrapper(*args, **kwargs)
   1640     _api.warn_deprecated(
   1641         '3.3', name=name, removal='3.6',
   1642         message='%(name)s() got unexpected keyword argument "'
   1643                 + arg + '" which is no longer supported as of '
   1644                 '%(since)s and will become an error '
   1645                 '%(removal)s')
   1646     kwargs.pop(arg)
-> 1648 return func(*args, **kwargs)

File ~\anaconda3\lib\site-packages\matplotlib\_api\deprecation.py:412, in delete_parameter.<locals>.wrapper(*inner_args, **inner_kwargs)
    402     deprecation_addendum = (
    403         f"If any parameter follows {name!r}, they should be passed as "
    404         f"keyword, not positionally.")
    405     warn_deprecated(
    406         since,
    407         name=repr(name),
   (...)
    410                  else deprecation_addendum,
    411         **kwargs)
--> 412 return func(*inner_args, **inner_kwargs)

File ~\anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py:540, in FigureCanvasAgg.print_png(self, filename_or_obj, metadata, pil_kwargs, *args)
    490 @_check_savefig_extra_args
    491 @_api.delete_parameter("3.5", "args")
    492 def print_png(self, filename_or_obj, *args,
    493               metadata=None, pil_kwargs=None):
    494     """
    495     Write the figure to a PNG file.
    496 
   (...)
    538         *metadata*, including the default 'Software' key.
    539     """
--> 540     FigureCanvasAgg.draw(self)
    541     mpl.image.imsave(
    542         filename_or_obj, self.buffer_rgba(), format="png", origin="upper",
    543         dpi=self.figure.dpi, metadata=metadata, pil_kwargs=pil_kwargs)

File ~\anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py:431, in FigureCanvasAgg.draw(self)
    429 def draw(self):
    430     # docstring inherited
--> 431     self.renderer = self.get_renderer(cleared=True)
    432     # Acquire a lock on the shared font cache.
    433     with RendererAgg.lock, \
    434          (self.toolbar._wait_cursor_for_draw_cm() if self.toolbar
    435           else nullcontext()):

File ~\anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py:447, in FigureCanvasAgg.get_renderer(self, cleared)
    444 reuse_renderer = (hasattr(self, "renderer")
    445                   and getattr(self, "_lastKey", None) == key)
    446 if not reuse_renderer:
--> 447     self.renderer = RendererAgg(w, h, self.figure.dpi)
    448     self._lastKey = key
    449 elif cleared:

File ~\anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py:93, in RendererAgg.__init__(self, width, height, dpi)
     91 self.width = width
     92 self.height = height
---> 93 self._renderer = _RendererAgg(int(width), int(height), dpi)
     94 self._filter_renderers = []
     96 self._update_methods()

ValueError: Image size of 16218x619698 pixels is too large. It must be less than 2^16 in each direction.
In [67]:
# qc_res.draw(output='mpl', plot_barriers=True, fold=-1).savefig('QAOA_circuit(full).png', dpi=720)
# qc_res.draw(output='mpl', plot_barriers=True, fold=-1)
In [68]:
xbest_qaoa = list(map(int, list(max(counts, key=counts.get))))
colors_qaoa = ['g' if xbest_qaoa[i] == 0 else 'c' for i in range(n)]

print('xbest_brute :', xbest_brute)
print('QAOA        :', xbest_qaoa)
xbest_brute : [1, 1, 0, 0, 0, 0, 0, 0]
QAOA        : [0, 0, 0, 1, 1, 1, 1, 1]
In [69]:
draw_graph(G, colors_brute, pos)
In [70]:
draw_graph(G, colors_qaoa, pos)
In [71]:
print('Best solution - Brute Force : ' +
      str(xbest_brute) + ',  cost = ' + str(best_cost_brute))
print('Best solution - QAOA        : ' + str(xbest_qaoa) +
      ',  cost = ' + str(-res[best_p].fun))
Best solution - Brute Force : [1, 1, 0, 0, 0, 0, 0, 0],  cost = 57.5457
Best solution - QAOA        : [0, 0, 0, 1, 1, 1, 1, 1],  cost = 45.17890615234373
In [72]:
# Visualising the clusters
x = data_iris_qaoa
y = np.array(xbest_brute)

plt.figure(figsize=(12, 8))
plt.scatter(x[y == 0, 0], x[y == 0, 1], s=100, c='purple', label='Cluster A')
plt.scatter(x[y == 1, 0], x[y == 1, 1], s=100, c='orange', label='Cluster B')
plt.title('Clustering using Brute-Force')
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.legend(title="Species",  loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()

# Plotting the centroids of the clusters
# plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=100, c='red', label='Centroids')
# plt.legend(title="Species",  loc='center left', bbox_to_anchor=(1, 0.5))
In [73]:
# Visualising the clusters
x = data_iris_qaoa
y = np.array(xbest_qaoa)

plt.figure(figsize=(12, 8))
plt.scatter(x[y == 0, 0], x[y == 0, 1], s=100, c='purple', label='Cluster A')
plt.scatter(x[y == 1, 0], x[y == 1, 1], s=100, c='orange', label='Cluster B')
plt.title('Clustering using QAOA')
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.legend(title="Species",  loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()

# Plotting the centroids of the clusters
# plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=100, c='red', label='Centroids')
# plt.legend(title="Species",  loc='center left', bbox_to_anchor=(1, 0.5))
In [74]:
# Visualising the clusters
x = data_iris_qaoa
y = np.array(data_iris_qaoa_label)

plt.figure(figsize=(12, 8))
plt.scatter(x[y == 0, 0], x[y == 0, 1], s=100,
            c='purple', label='Cluster A(setosa)')
plt.scatter(x[y == 1, 0], x[y == 1, 1], s=100,
            c='orange', label='Cluster B(versicolor)')
plt.scatter(x[y == 2, 0], x[y == 2, 1], s=100,
            c='orange', label='Cluster B(virginica)')
plt.title('Clustering with True labels')
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.legend(title="Species",  loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()

# Plotting the centroids of the clusters
# plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=100, c='red', label='Centroids')
# plt.legend(title="Species",  loc='center left', bbox_to_anchor=(1, 0.5))

Comparison with K-means Clustering¶

In [75]:
import os
os.environ["OMP_NUM_THREADS"] = '1'
In [76]:
# Finding the optimum number of clusters for k-means classification
from sklearn.cluster import KMeans
x = data_iris_sample.iloc[:, 0:4]
x
Out[76]:
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)
15 5.7 4.4 1.5 0.4
27 5.2 3.5 1.5 0.2
114 5.8 2.8 5.1 2.4
117 7.7 3.8 6.7 2.2
125 7.2 3.2 6.0 1.8
126 6.2 2.8 4.8 1.8
140 6.7 3.1 5.6 2.4
141 6.9 3.1 5.1 2.3
In [77]:
# Finding the optimum number of clusters for k-means classification
wcss = []
max_num_cluster = len(data_iris_sample)

for i in range(1, max_num_cluster):
    kmeans = KMeans(n_clusters=i, init='k-means++',
                    max_iter=300, n_init=10, random_state=0)
    kmeans.fit(x)
    wcss.append(kmeans.inertia_)

plt.plot(range(1, max_num_cluster), wcss)
plt.xticks(np.arange(1, max_num_cluster, step=1))  # Set label locations.
plt.title('The elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('Within Cluster Sum of Squares')  # within cluster sum of squares
plt.show()
C:\Users\user1\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:1036: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=1.
  warnings.warn(
In [78]:
num_cluster = 2

kmeans = KMeans(n_clusters=num_cluster, init='k-means++',
                max_iter=300, n_init=10, random_state=0)
y_kmeans = kmeans.fit_predict(x)

x.iloc[0:10, :]

y_kmeans
Out[78]:
array([1, 1, 0, 0, 0, 0, 0, 0])
In [79]:
y_kmeans
Out[79]:
array([1, 1, 0, 0, 0, 0, 0, 0])
In [80]:
# Visualising the clusters
plt.figure(figsize=(12, 8))
plt.scatter(x.iloc[y_kmeans == 0, 0], x.iloc[y_kmeans ==
            0, 1], s=100, c='purple', label='Cluster A')
plt.scatter(x.iloc[y_kmeans == 1, 0], x.iloc[y_kmeans ==
            1, 1], s=100, c='orange', label='Cluster B')
# plt.scatter(x.iloc[y_kmeans == 2, 0], x.iloc[y_kmeans == 2, 1], s=100, c='green', label='Cluster C')
plt.title('Clustering with K-Means')
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.legend(title="Species",  loc='center left', bbox_to_anchor=(1, 0.5))


# Plotting the centroids of the clusters
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[
            :, 1], s=100, marker="v", c='red', label='Centroids')
plt.legend(title="Species",  loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()

Silhoutte Score¶

Code with example¶

In [81]:
from sklearn.cluster import KMeans
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')

np.random.seed(100)
num_data = 50

x11 = np.linspace(0.3, 0.7, 20)
x12 = np.linspace(1.3, 1.8, 15)
x13 = np.linspace(2.4, 3, 15)
x1 = np.concatenate((x11, x12, x13), axis=None)
error = np.random.normal(1, 0.5, num_data)
x2 = 1.5*x1+2+error
In [82]:
fig = plt.figure(figsize=(7, 7))
fig.set_facecolor('white')
plt.scatter(x1, x2, color='k')
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
In [83]:
X = np.stack([x1, x2], axis=1)
init = np.array([[2., 4.], [1., 5.], [2.5, 6.]])

kmeans = KMeans(n_clusters=3, init=init)
kmeans.fit(X)
labels = kmeans.labels_
In [84]:
def get_silhouette_results(X, labels):
    def get_sum_distance(target_x, target_cluster):
        res = np.sum([np.linalg.norm(target_x-x) for x in target_cluster])
        return res

    '''
    각 데이터 포인트를 돌면서 a(i), b(i)를 계산
    그리고 s(i)를 계산한다.
    
    마지막으로 Silhouette(실루엣) Coefficient를 계산한다.
    '''
    uniq_labels = np.unique(labels)
    silhouette_val_list = []
    for i in range(len(labels)):
        target_data = X[i]

        # calculate a(i)
        target_label = labels[i]
        target_cluster_data_idx = np.where(labels == target_label)[0]
        if len(target_cluster_data_idx) == 1:
            silhouette_val_list.append(0)
            continue
        else:
            target_cluster_data = X[target_cluster_data_idx]
            temp1 = get_sum_distance(target_data, target_cluster_data)
            a_i = temp1/(target_cluster_data.shape[0]-1)

        # calculate b(i)
        b_i_list = []
        label_list = uniq_labels[np.unique(labels) != target_label]
        for ll in label_list:
            other_cluster_data_idx = np.where(labels == ll)[0]
            other_cluster_data = X[other_cluster_data_idx]
            temp2 = get_sum_distance(target_data, other_cluster_data)
            temp_b_i = temp2/other_cluster_data.shape[0]
            b_i_list.append(temp_b_i)

        b_i = min(b_i_list)
        s_i = (b_i-a_i)/max(a_i, b_i)
        silhouette_val_list.append(s_i)

    silhouette_coef_list = []
    for ul in uniq_labels:
        temp3 = np.mean(
            [s for s, l in zip(silhouette_val_list, labels) if l == ul])
        silhouette_coef_list.append(temp3)

    silhouette_coef = max(silhouette_coef_list)
    return (silhouette_coef, np.array(silhouette_val_list))
In [85]:
silhouette_coef, silhouette_val_list = get_silhouette_results(X, labels)
print(silhouette_coef)
0.7434423527756951
In [86]:
import seaborn as sns

# 각 클러스터별로 Silhouette(실루엣) 값을 정렬한다.
uniq_labels = np.unique(labels)
sorted_cluster_svl = []
rearr_labels = []
for ul in uniq_labels:
    labels_idx = np.where(labels == ul)[0]
    target_svl = silhouette_val_list[labels_idx]
    sorted_cluster_svl += sorted(target_svl)
    rearr_labels += [ul]*len(target_svl)

colors = sns.color_palette('hls', len(uniq_labels))
color_labels = [colors[i] for i in rearr_labels]

fig = plt.figure(figsize=(6, 10))
fig.set_facecolor('white')
plt.barh(range(len(sorted_cluster_svl)),
         sorted_cluster_svl, color=color_labels)
plt.ylabel('Data Index')
plt.xlabel('Silhouette Value')
plt.axvline(x=np.mean(sorted_cluster_svl),
            color='black', label='avg. Silhouette score')
plt.text(np.mean(sorted_cluster_svl)+0.02, -1,
         'Avg. \nSilhouette score='+str(round(np.mean(sorted_cluster_svl), 4)))
plt.show()

Brute Force - Silhoutte Score¶

In [87]:
X = data_iris_qaoa
labels = np.array(xbest_brute)
silhouette_coef, silhouette_val_list = get_silhouette_results(X, labels)
print(silhouette_coef)
0.7812869798143147
In [88]:
import seaborn as sns

# 각 클러스터별로 Silhouette(실루엣) 값을 정렬한다.
uniq_labels = np.unique(labels)
sorted_cluster_svl = []
rearr_labels = []
for ul in uniq_labels:
    labels_idx = np.where(labels == ul)[0]
    target_svl = silhouette_val_list[labels_idx]
    sorted_cluster_svl += sorted(target_svl)
    rearr_labels += [ul]*len(target_svl)

colors = sns.color_palette('hls', len(uniq_labels))
color_labels = [colors[i] for i in rearr_labels]

fig = plt.figure(figsize=(6, 10))
fig.set_facecolor('white')
plt.barh(range(len(sorted_cluster_svl)),
         sorted_cluster_svl, color=color_labels)
plt.title('Silhoutte Score of Brute Force')
plt.ylabel('Data Index')
plt.xlabel('Silhouette Value')
plt.axvline(x=np.mean(sorted_cluster_svl),
            color='black', label='avg. Silhouette score')
plt.text(np.mean(sorted_cluster_svl)+0.02, -1,
         'Avg. \nSilhouette score='+str(round(np.mean(sorted_cluster_svl), 4)))
plt.show()

QAOA - Silhoutte Score¶

In [89]:
X = data_iris_qaoa
labels = np.array(xbest_qaoa)
silhouette_coef, silhouette_val_list = get_silhouette_results(X, labels)
print(silhouette_coef)
0.6297123253199175
In [90]:
import seaborn as sns

# 각 클러스터별로 Silhouette(실루엣) 값을 정렬한다.
uniq_labels = np.unique(labels)
sorted_cluster_svl = []
rearr_labels = []
for ul in uniq_labels:
    labels_idx = np.where(labels == ul)[0]
    target_svl = silhouette_val_list[labels_idx]
    sorted_cluster_svl += sorted(target_svl)
    rearr_labels += [ul]*len(target_svl)

colors = sns.color_palette('hls', len(uniq_labels))
color_labels = [colors[i] for i in rearr_labels]

fig = plt.figure(figsize=(6, 10))
fig.set_facecolor('white')
plt.barh(range(len(sorted_cluster_svl)),
         sorted_cluster_svl, color=color_labels)
plt.title('Silhoutte Score of QAOA')
plt.ylabel('Data Index')
plt.xlabel('Silhouette Value')
plt.axvline(x=np.mean(sorted_cluster_svl),
            color='black', label='avg. Silhouette score')
plt.text(np.mean(sorted_cluster_svl)+0.02, -1,
         'Avg. \nSilhouette score='+str(round(np.mean(sorted_cluster_svl), 4)))
plt.show()

K-Means - Silhoutte Score¶

In [91]:
X = data_iris_qaoa
labels = y_kmeans
silhouette_coef, silhouette_val_list = get_silhouette_results(X, labels)
print(silhouette_coef)
0.7812869798143147
In [92]:
type(xbest_brute)
Out[92]:
list
In [93]:
import seaborn as sns

# 각 클러스터별로 Silhouette(실루엣) 값을 정렬한다.
uniq_labels = np.unique(labels)
sorted_cluster_svl = []
rearr_labels = []
for ul in uniq_labels:
    labels_idx = np.where(labels == ul)[0]
    target_svl = silhouette_val_list[labels_idx]
    sorted_cluster_svl += sorted(target_svl)
    rearr_labels += [ul]*len(target_svl)

colors = sns.color_palette('hls', len(uniq_labels))
color_labels = [colors[i] for i in rearr_labels]

fig = plt.figure(figsize=(6, 10))
fig.set_facecolor('white')
plt.barh(range(len(sorted_cluster_svl)),
         sorted_cluster_svl, color=color_labels)
plt.title('Silhoutte Score of K-Means')
plt.ylabel('Data Index')
plt.xlabel('Silhouette Value')
plt.axvline(x=np.mean(sorted_cluster_svl),
            color='black', label='avg. Silhouette score')
plt.text(np.mean(sorted_cluster_svl)+0.02, -1,
         'Avg. \nSilhouette score='+str(round(np.mean(sorted_cluster_svl), 4)))
plt.show()

True Label- Silhoutte Score¶

In [94]:
data_iris_qaoa_label
data_iris_qaoa_label2 = np.where(
    data_iris_qaoa_label > 1, 1, data_iris_qaoa_label)
data_iris_qaoa_label2
Out[94]:
array([0, 0, 1, 1, 1, 1, 1, 1])
In [95]:
X = data_iris_qaoa
labels = data_iris_qaoa_label2
silhouette_coef, silhouette_val_list = get_silhouette_results(X, labels)
print(silhouette_coef)
0.7812869798143147
In [96]:
import seaborn as sns

# 각 클러스터별로 Silhouette(실루엣) 값을 정렬한다.
uniq_labels = np.unique(labels)
sorted_cluster_svl = []
rearr_labels = []
for ul in uniq_labels:
    labels_idx = np.where(labels == ul)[0]
    target_svl = silhouette_val_list[labels_idx]
    sorted_cluster_svl += sorted(target_svl)
    rearr_labels += [ul]*len(target_svl)

colors = sns.color_palette('hls', len(uniq_labels))
color_labels = [colors[i] for i in rearr_labels]

fig = plt.figure(figsize=(6, 10))
fig.set_facecolor('white')
plt.barh(range(len(sorted_cluster_svl)),
         sorted_cluster_svl, color=color_labels)
plt.title('Silhoutte Score of True Label')
plt.ylabel('Data Index')
plt.xlabel('Silhouette Value')
plt.axvline(x=np.mean(sorted_cluster_svl),
            color='black', label='avg. Silhouette score')
plt.text(np.mean(sorted_cluster_svl)+0.02, -1,
         'Avg. \nSilhouette score='+str(round(np.mean(sorted_cluster_svl), 4)))
plt.show()
In [ ]:
 

Dunn Index¶

In [97]:
import numpy as np
import random
import matplotlib.pyplot as plt

from itertools import combinations

Code with example¶

In [98]:
import numpy as np
import random
import matplotlib.pyplot as plt

from itertools import combinations

np.random.seed(100)
num_data = 20

x1 = np.linspace(0.3, 0.7, num_data)
error = np.random.normal(1, 0.5, num_data)
x2 = 1.5*x1+2+error

X = np.stack([x1, x2], axis=1)
X
Out[98]:
array([[0.3       , 2.57511726],
       [0.32105263, 3.65291915],
       [0.34210526, 4.0896758 ],
       [0.36315789, 3.41851882],
       [0.38421053, 4.06697618],
       [0.40526316, 3.86500416],
       [0.42631579, 3.75006352],
       [0.44736842, 3.13603097],
       [0.46842105, 3.60788366],
       [0.48947368, 3.86171125],
       [0.51052632, 3.53677598],
       [0.53157895, 4.01495017],
       [0.55263158, 3.53714984],
       [0.57368421, 4.26894985],
       [0.59473684, 4.22846567],
       [0.61578947, 3.87147864],
       [0.63684211, 3.68962297],
       [0.65789474, 4.50170845],
       [0.67894737, 3.79935324],
       [0.7       , 3.49084088]])
In [99]:
fig = plt.figure(figsize=(7, 7))
fig.set_facecolor('white')
plt.scatter(x1, x2, color='k')
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
In [ ]:
 

클러스터 내 거리 측도 Intra-Cluster distance measure¶

  • a. Complete Diameter Distance: This function below calculates the maximum distance between two points within the cluster. (This version was proposed by Dunn).
In [100]:
def complete_diameter_distance(X):
    res = []
    for i, j in combinations(range(X.shape[0]), 2):
        a_i = X[i, :]
        a_j = X[j, :]
        res.append(np.linalg.norm(a_i-a_j))

    return np.max(res)
In [101]:
complete_diameter_distance(X)
Out[101]:
1.9595515390777758
  • b. Average Diamiter Distance: This function calculates the mean distance between all pairs within the same cluster.
In [102]:
def average_diameter_distance(X):
    res = []
    for i, j in combinations(range(X.shape[0]), 2):
        a_i = X[i, :]
        a_j = X[j, :]
        res.append(np.linalg.norm(a_i-a_j))

    return np.mean(res)
In [103]:
average_diameter_distance(X)
Out[103]:
0.5159584769337329
  • c. Average Diamiter Distance: This function calculates distance of all the points from the mean within the cluster.
In [104]:
def centroid_diameter_distance(X):
    center = np.mean(X, axis=0)
    res = 2*np.mean([np.linalg.norm(x-center) for x in X])

    return res
In [105]:
centroid_diameter_distance(X)
Out[105]:
0.6874635793987067

클러스터 간 거리 측도 Inter-Cluster distance measure¶

  • a. Single Linkage Distance: This function below calculates the minimum distance between clusters.
In [106]:
np.random.seed(100)

x11 = np.linspace(0.3, 0.7, 20)
label1 = [0]*len(x11)
x12 = np.linspace(1.3, 1.8, 15)
label2 = [1]*len(x12)
error = np.random.normal(1, 0.5, 35)
x1 = np.concatenate((x11, x12), axis=None)
x2 = 1.5*x1+2+error
labels = label1+label2

X = np.stack((x1, x2), axis=1)

labels = np.array(labels)
X1 = X[np.where(labels == 0)[0], :]
X2 = X[np.where(labels == 1)[0], :]
In [107]:
fig = plt.figure(figsize=(7, 7))
fig.set_facecolor('white')
for i, x in enumerate(X):
    if labels[i] == 0:
        plt.scatter(x[0], x[1], color='blue')
    else:
        plt.scatter(x[0], x[1], color='red')
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
In [108]:
def single_linkage_distance(X1, X2):
    res = []
    for x1 in X1:
        for x2 in X2:
            res.append(np.linalg.norm(x1-x2))
    return np.min(res)
In [109]:
single_linkage_distance(X1, X2)
Out[109]:
0.7724228550378145
  • b. Complete Linkage Distance: This function below calculates the maximum distance between clusters.
In [110]:
def complete_linkage_distance(X1, X2):
    res = []
    for x1 in X1:
        for x2 in X2:
            res.append(np.linalg.norm(x1-x2))
    return np.max(res)
In [111]:
complete_linkage_distance(X1, X2)
Out[111]:
3.807983171195838
  • c. Average Linkage Distance: This function below calculates the minimum distance between clusters.
In [112]:
def average_linkage_distance(X1, X2):
    res = []
    for x1 in X1:
        for x2 in X2:
            res.append(np.linalg.norm(x1-x2))
    return np.mean(res)
In [113]:
average_linkage_distance(X1, X2)
Out[113]:
2.0502961616379003
  • d. Centroid Linkage Distance: This function below calculates distance between centoids of two clusters.
In [114]:
def centroid_linkage_distance(X1, X2):
    center1 = np.mean(X1, axis=0)
    center2 = np.mean(X2, axis=0)
    return np.linalg.norm(center1-center2)
In [115]:
centroid_linkage_distance(X1, X2)
Out[115]:
2.023846293346597
  • e. Average of Centroids Linkage Distance: This function below calculates average value of the distances between a centroid from one cluster and the data points from other clusters.
In [116]:
def average_of_centroids_linkage_distance(X1, X2):
    center1 = np.mean(X1, axis=0)
    center2 = np.mean(X2, axis=0)
    res = []
    for x1 in X1:
        res.append(np.linalg.norm(x1-center2))
    for x2 in X2:
        res.append(np.linalg.norm(x2-center1))

    return np.mean(res)
In [117]:
average_of_centroids_linkage_distance(X1, X2)
Out[117]:
2.035733790732974

Dunn Index 파이썬 구현¶

In [118]:
np.random.seed(100)
num_data = 50

x11 = np.linspace(0.3, 0.7, 20)
label1 = [0]*len(x11)
x12 = np.linspace(1.3, 1.8, 15)
label2 = [1]*len(x12)
x13 = np.linspace(2.4, 3, 15)
label3 = [2]*len(x13)
x1 = np.concatenate((x11, x12, x13), axis=None)
error = np.random.normal(1, 0.5, num_data)
x2 = 1.5*x1+2+error

X = np.stack((x1, x2), axis=1)
labels = np.array(label1+label2+label3)
In [119]:
fig = plt.figure(figsize=(7, 7))
fig.set_facecolor('white')
for i, x in enumerate(X):
    if labels[i] == 0:
        plt.scatter(x[0], x[1], color='blue')
    elif labels[i] == 1:
        plt.scatter(x[0], x[1], color='red')
    else:
        plt.scatter(x[0], x[1], color='green')
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
In [120]:
def Dunn_index(X, labels, intra_cluster_distance_type, inter_cluster_distance_type):
    intra_cdt_dict = {
        'cmpl_dd': complete_diameter_distance,
        'avdd': average_diameter_distance,
        'cent_dd': centroid_diameter_distance
    }
    inter_cdt_dict = {
        'sld': single_linkage_distance,
        'cmpl_ld': complete_linkage_distance,
        'avld': average_linkage_distance,
        'cent_ld': centroid_linkage_distance,
        'av_cent_ld': average_of_centroids_linkage_distance
    }
    # intra cluster distance
    intra_cluster_distance = intra_cdt_dict[intra_cluster_distance_type]

    # inter cluster distance
    inter_cluster_distance = inter_cdt_dict[inter_cluster_distance_type]

    # get minimum value of inter_cluster_distance
    res1 = []
    for i, j in combinations(np.unique(labels), 2):
        X1 = X[np.where(labels == i)[0], :]
        X2 = X[np.where(labels == j)[0], :]
        res1.append(inter_cluster_distance(X1, X2))
    min_inter_cd = np.min(res1)

    # get maximum value of intra_cluser_distance

    res2 = []
    for label in np.unique(labels):
        X_target = X[np.where(labels == label)[0], :]
        if X_target.shape[0] >= 2:
            res2.append(intra_cluster_distance(X_target))
        else:
            res2.append(0)
    max_intra_cd = np.max(res2)

    Dunn_idx = min_inter_cd/max_intra_cd
    return Dunn_idx
In [121]:
intra_cluster_distance_type = ['cmpl_dd', 'avdd', 'cent_dd']
inter_cluster_distance_type = [
    'sld', 'cmpl_ld', 'avld', 'cent_ld', 'av_cent_ld']

for i in range(len(intra_cluster_distance_type)):
    for j in range(len(inter_cluster_distance_type)):
        print("Dunn Index:", "Intra Cluster Dist.:", '%-10s' % intra_cluster_distance_type[i],
              "Inter Cluster Dist.:", '%-12s' % inter_cluster_distance_type[j],
              "Dunn Index Valule:", Dunn_index(X, labels, intra_cluster_distance_type[i], inter_cluster_distance_type[j]))
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: sld          Dunn Index Valule: 0.2780279425885912
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 1.5816104606529293
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: avld         Dunn Index Valule: 0.7627361333011984
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: cent_ld      Dunn Index Valule: 0.7374610426286897
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 0.7486880384584048
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: sld          Dunn Index Valule: 0.8233205335652861
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 4.683602504961463
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: avld         Dunn Index Valule: 2.2586806002025015
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: cent_ld      Dunn Index Valule: 2.1838338026300956
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 2.2170801594919096
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: sld          Dunn Index Valule: 0.6140262424157213
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 3.492995412900513
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: avld         Dunn Index Valule: 1.6845069510824404
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: cent_ld      Dunn Index Valule: 1.6286867741323774
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 1.6534816562537682
In [122]:
import random
import pandas as pd

# 빈 DataFrame 생성하기
Dunn_Index_result = pd.DataFrame(
    columns=['intra cluster', 'inter cluster', 'Dunn index'])
Dunn_Index_result
Out[122]:
intra cluster inter cluster Dunn index
In [123]:
len(inter_cluster_distance_type)
Out[123]:
5
In [124]:
for i in range(len(intra_cluster_distance_type)):
    for j in range(len(inter_cluster_distance_type)):
        Dunn_Index_result.loc[len(inter_cluster_distance_type)
                              * i+j, 'intra cluster'] = intra_cluster_distance_type[i]
        Dunn_Index_result.loc[len(inter_cluster_distance_type)
                              * i+j, 'inter cluster'] = inter_cluster_distance_type[j],
        Dunn_Index_result.loc[len(inter_cluster_distance_type)*i+j, 'Dunn index'] = Dunn_index(
            X, labels, intra_cluster_distance_type[i], inter_cluster_distance_type[j])

Dunn_Index_result
Out[124]:
intra cluster inter cluster Dunn index
0 cmpl_dd (sld,) 0.278028
1 cmpl_dd (cmpl_ld,) 1.58161
2 cmpl_dd (avld,) 0.762736
3 cmpl_dd (cent_ld,) 0.737461
4 cmpl_dd (av_cent_ld,) 0.748688
5 avdd (sld,) 0.823321
6 avdd (cmpl_ld,) 4.683603
7 avdd (avld,) 2.258681
8 avdd (cent_ld,) 2.183834
9 avdd (av_cent_ld,) 2.21708
10 cent_dd (sld,) 0.614026
11 cent_dd (cmpl_ld,) 3.492995
12 cent_dd (avld,) 1.684507
13 cent_dd (cent_ld,) 1.628687
14 cent_dd (av_cent_ld,) 1.653482
In [125]:
Dunn_Index_result[Dunn_Index_result['Dunn index']
                  == Dunn_Index_result['Dunn index'].max()]
Out[125]:
intra cluster inter cluster Dunn index
6 avdd (cmpl_ld,) 4.683603
In [ ]:
 

Brute Force - Dunn Index¶

In [126]:
X = data_iris_qaoa
labels = np.array(xbest_brute)

for i in range(len(intra_cluster_distance_type)):
    for j in range(len(inter_cluster_distance_type)):
        print("Dunn Index:", "Intra Cluster Dist.:", '%-10s' % intra_cluster_distance_type[i],
              "Inter Cluster Dist.:", '%-12s' % inter_cluster_distance_type[j],
              "Dunn Index Valule:", Dunn_index(X, labels, intra_cluster_distance_type[i], inter_cluster_distance_type[j]))
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: sld          Dunn Index Valule: 1.4394867323822669
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 2.276942252104228
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: avld         Dunn Index Valule: 1.7859269188276758
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: cent_ld      Dunn Index Valule: 1.7540104411101725
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 1.7725659569627754
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: sld          Dunn Index Valule: 2.7497575705234842
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 4.349494201316283
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: avld         Dunn Index Valule: 3.411539651581841
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: cent_ld      Dunn Index Valule: 3.3505716869220294
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 3.3860170780068253
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: sld          Dunn Index Valule: 2.1864618020548283
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 3.4584877704788215
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: avld         Dunn Index Valule: 2.7126759152659012
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: cent_ld      Dunn Index Valule: 2.66419741399485
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 2.692381714493401
In [127]:
# 빈 DataFrame 생성하기
Dunn_Index_result_brute = pd.DataFrame(columns=['intra cluster', 'inter cluster', 'Dunn index'])
Dunn_Index_result_brute
Out[127]:
intra cluster inter cluster Dunn index
In [128]:
for i in range(len(intra_cluster_distance_type)):
    for j in range(len(inter_cluster_distance_type)):
        Dunn_Index_result_brute.loc[len(inter_cluster_distance_type)
                              * i+j, 'intra cluster'] = intra_cluster_distance_type[i]
        Dunn_Index_result_brute.loc[len(inter_cluster_distance_type)
                              * i+j, 'inter cluster'] = inter_cluster_distance_type[j]
        Dunn_Index_result_brute.loc[len(inter_cluster_distance_type)*i+j, 'Dunn index'] = Dunn_index(
            X, labels, intra_cluster_distance_type[i], inter_cluster_distance_type[j])

Dunn_Index_result_brute
Out[128]:
intra cluster inter cluster Dunn index
0 cmpl_dd sld 1.439487
1 cmpl_dd cmpl_ld 2.276942
2 cmpl_dd avld 1.785927
3 cmpl_dd cent_ld 1.75401
4 cmpl_dd av_cent_ld 1.772566
5 avdd sld 2.749758
6 avdd cmpl_ld 4.349494
7 avdd avld 3.41154
8 avdd cent_ld 3.350572
9 avdd av_cent_ld 3.386017
10 cent_dd sld 2.186462
11 cent_dd cmpl_ld 3.458488
12 cent_dd avld 2.712676
13 cent_dd cent_ld 2.664197
14 cent_dd av_cent_ld 2.692382
In [129]:
Dunn_Index_result_brute[Dunn_Index_result_brute['Dunn index']== Dunn_Index_result_brute['Dunn index'].max()]
Out[129]:
intra cluster inter cluster Dunn index
6 avdd cmpl_ld 4.349494

QAOA - Dunn Index¶

In [130]:
X = data_iris_qaoa
labels = np.array(xbest_qaoa)

for i in range(len(intra_cluster_distance_type)):
    for j in range(len(inter_cluster_distance_type)):
        print("Dunn Index:", "Intra Cluster Dist.:", '%-10s' % intra_cluster_distance_type[i],
              "Inter Cluster Dist.:", '%-12s' % inter_cluster_distance_type[j],
              "Dunn Index Valule:", Dunn_index(X, labels, intra_cluster_distance_type[i], inter_cluster_distance_type[j]))
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: sld          Dunn Index Valule: 0.17673143177130227
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 1.3834661161819817
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: avld         Dunn Index Valule: 0.8493813016706195
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: cent_ld      Dunn Index Valule: 0.7797385860760765
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 0.8060241534341677
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: sld          Dunn Index Valule: 0.23941543331153467
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 1.8741609025504387
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: avld         Dunn Index Valule: 1.1506441743160687
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: cent_ld      Dunn Index Valule: 1.0563002267570645
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 1.0919088926055964
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: sld          Dunn Index Valule: 0.19635769815265816
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 1.537101914034305
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: avld         Dunn Index Valule: 0.9437062529192594
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: cent_ld      Dunn Index Valule: 0.8663296188356259
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 0.8955342342757953
In [131]:
# 빈 DataFrame 생성하기
Dunn_Index_result_qaoa = pd.DataFrame(columns=['intra cluster', 'inter cluster', 'Dunn index'])
Dunn_Index_result_qaoa
Out[131]:
intra cluster inter cluster Dunn index
In [132]:
for i in range(len(intra_cluster_distance_type)):
    for j in range(len(inter_cluster_distance_type)):
        Dunn_Index_result_qaoa.loc[len(inter_cluster_distance_type)
                              * i+j, 'intra cluster'] = intra_cluster_distance_type[i]
        Dunn_Index_result_qaoa.loc[len(inter_cluster_distance_type)
                              * i+j, 'inter cluster'] = inter_cluster_distance_type[j]
        Dunn_Index_result_qaoa.loc[len(inter_cluster_distance_type)*i+j, 'Dunn index'] = Dunn_index(
            X, labels, intra_cluster_distance_type[i], inter_cluster_distance_type[j])

Dunn_Index_result_qaoa
Out[132]:
intra cluster inter cluster Dunn index
0 cmpl_dd sld 0.176731
1 cmpl_dd cmpl_ld 1.383466
2 cmpl_dd avld 0.849381
3 cmpl_dd cent_ld 0.779739
4 cmpl_dd av_cent_ld 0.806024
5 avdd sld 0.239415
6 avdd cmpl_ld 1.874161
7 avdd avld 1.150644
8 avdd cent_ld 1.0563
9 avdd av_cent_ld 1.091909
10 cent_dd sld 0.196358
11 cent_dd cmpl_ld 1.537102
12 cent_dd avld 0.943706
13 cent_dd cent_ld 0.86633
14 cent_dd av_cent_ld 0.895534
In [133]:
Dunn_Index_result_qaoa[Dunn_Index_result_qaoa['Dunn index']== Dunn_Index_result_qaoa['Dunn index'].max()]
Out[133]:
intra cluster inter cluster Dunn index
6 avdd cmpl_ld 1.874161

K-Means - Dunn Index¶

In [134]:
X = data_iris_qaoa
labels = y_kmeans

for i in range(len(intra_cluster_distance_type)):
    for j in range(len(inter_cluster_distance_type)):
        print("Dunn Index:", "Intra Cluster Dist.:", '%-10s' % intra_cluster_distance_type[i],
              "Inter Cluster Dist.:", '%-12s' % inter_cluster_distance_type[j],
              "Dunn Index Valule:", Dunn_index(X, labels, intra_cluster_distance_type[i], inter_cluster_distance_type[j]))
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: sld          Dunn Index Valule: 1.4394867323822669
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 2.276942252104228
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: avld         Dunn Index Valule: 1.7859269188276758
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: cent_ld      Dunn Index Valule: 1.7540104411101725
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 1.7725659569627754
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: sld          Dunn Index Valule: 2.7497575705234842
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 4.349494201316283
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: avld         Dunn Index Valule: 3.411539651581841
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: cent_ld      Dunn Index Valule: 3.3505716869220294
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 3.3860170780068253
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: sld          Dunn Index Valule: 2.1864618020548283
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 3.4584877704788215
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: avld         Dunn Index Valule: 2.7126759152659012
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: cent_ld      Dunn Index Valule: 2.66419741399485
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 2.692381714493401
In [135]:
# 빈 DataFrame 생성하기
Dunn_Index_result_kmeans = pd.DataFrame(columns=['intra cluster', 'inter cluster', 'Dunn index'])
Dunn_Index_result_kmeans
Out[135]:
intra cluster inter cluster Dunn index
In [136]:
for i in range(len(intra_cluster_distance_type)):
    for j in range(len(inter_cluster_distance_type)):
        Dunn_Index_result_kmeans.loc[len(inter_cluster_distance_type)
                              * i+j, 'intra cluster'] = intra_cluster_distance_type[i]
        Dunn_Index_result_kmeans.loc[len(inter_cluster_distance_type)
                              * i+j, 'inter cluster'] = inter_cluster_distance_type[j]
        Dunn_Index_result_kmeans.loc[len(inter_cluster_distance_type)*i+j, 'Dunn index'] = Dunn_index(
            X, labels, intra_cluster_distance_type[i], inter_cluster_distance_type[j])

Dunn_Index_result_kmeans
Out[136]:
intra cluster inter cluster Dunn index
0 cmpl_dd sld 1.439487
1 cmpl_dd cmpl_ld 2.276942
2 cmpl_dd avld 1.785927
3 cmpl_dd cent_ld 1.75401
4 cmpl_dd av_cent_ld 1.772566
5 avdd sld 2.749758
6 avdd cmpl_ld 4.349494
7 avdd avld 3.41154
8 avdd cent_ld 3.350572
9 avdd av_cent_ld 3.386017
10 cent_dd sld 2.186462
11 cent_dd cmpl_ld 3.458488
12 cent_dd avld 2.712676
13 cent_dd cent_ld 2.664197
14 cent_dd av_cent_ld 2.692382
In [137]:
Dunn_Index_result_kmeans[Dunn_Index_result_kmeans['Dunn index']== Dunn_Index_result_kmeans['Dunn index'].max()]
Out[137]:
intra cluster inter cluster Dunn index
6 avdd cmpl_ld 4.349494

True Label - Dunn Index¶

In [138]:
X = data_iris_qaoa
labels = data_iris_qaoa_label2

for i in range(len(intra_cluster_distance_type)):
    for j in range(len(inter_cluster_distance_type)):
        print("Dunn Index:", "Intra Cluster Dist.:", '%-10s' % intra_cluster_distance_type[i],
              "Inter Cluster Dist.:", '%-12s' % inter_cluster_distance_type[j],
              "Dunn Index Valule:", Dunn_index(X, labels, intra_cluster_distance_type[i], inter_cluster_distance_type[j]))
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: sld          Dunn Index Valule: 1.4394867323822669
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 2.276942252104228
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: avld         Dunn Index Valule: 1.7859269188276754
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: cent_ld      Dunn Index Valule: 1.7540104411101725
Dunn Index: Intra Cluster Dist.: cmpl_dd    Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 1.7725659569627754
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: sld          Dunn Index Valule: 2.7497575705234842
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 4.349494201316283
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: avld         Dunn Index Valule: 3.41153965158184
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: cent_ld      Dunn Index Valule: 3.3505716869220294
Dunn Index: Intra Cluster Dist.: avdd       Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 3.3860170780068253
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: sld          Dunn Index Valule: 2.1864618020548283
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: cmpl_ld      Dunn Index Valule: 3.4584877704788215
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: avld         Dunn Index Valule: 2.712675915265901
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: cent_ld      Dunn Index Valule: 2.66419741399485
Dunn Index: Intra Cluster Dist.: cent_dd    Inter Cluster Dist.: av_cent_ld   Dunn Index Valule: 2.692381714493401
In [139]:
# 빈 DataFrame 생성하기
Dunn_Index_result_truelabel = pd.DataFrame(columns=['intra cluster', 'inter cluster', 'Dunn index'])
Dunn_Index_result_truelabel
Out[139]:
intra cluster inter cluster Dunn index
In [140]:
for i in range(len(intra_cluster_distance_type)):
    for j in range(len(inter_cluster_distance_type)):
        Dunn_Index_result_truelabel.loc[len(inter_cluster_distance_type)
                              * i+j, 'intra cluster'] = intra_cluster_distance_type[i]
        Dunn_Index_result_truelabel.loc[len(inter_cluster_distance_type)
                              * i+j, 'inter cluster'] = inter_cluster_distance_type[j]
        Dunn_Index_result_truelabel.loc[len(inter_cluster_distance_type)*i+j, 'Dunn index'] = Dunn_index(
            X, labels, intra_cluster_distance_type[i], inter_cluster_distance_type[j])

Dunn_Index_result_truelabel
Out[140]:
intra cluster inter cluster Dunn index
0 cmpl_dd sld 1.439487
1 cmpl_dd cmpl_ld 2.276942
2 cmpl_dd avld 1.785927
3 cmpl_dd cent_ld 1.75401
4 cmpl_dd av_cent_ld 1.772566
5 avdd sld 2.749758
6 avdd cmpl_ld 4.349494
7 avdd avld 3.41154
8 avdd cent_ld 3.350572
9 avdd av_cent_ld 3.386017
10 cent_dd sld 2.186462
11 cent_dd cmpl_ld 3.458488
12 cent_dd avld 2.712676
13 cent_dd cent_ld 2.664197
14 cent_dd av_cent_ld 2.692382
In [141]:
Dunn_Index_result_truelabel[Dunn_Index_result_truelabel['Dunn index']== Dunn_Index_result_truelabel['Dunn index'].max()]
Out[141]:
intra cluster inter cluster Dunn index
6 avdd cmpl_ld 4.349494
In [142]:
Dunn_Index_results_combined = Dunn_Index_result_brute[['intra cluster', 'inter cluster']]
Dunn_Index_results_combined['Dunn index - Brute'] = Dunn_Index_result_brute['Dunn index']
Dunn_Index_results_combined['Dunn index - QAOA'] = Dunn_Index_result_qaoa['Dunn index']
Dunn_Index_results_combined['Dunn index - K-Means'] = Dunn_Index_result_kmeans['Dunn index']
Dunn_Index_results_combined['Dunn index - True label'] = Dunn_Index_result_truelabel['Dunn index']

Dunn_Index_results_combined
Out[142]:
intra cluster inter cluster Dunn index - Brute Dunn index - QAOA Dunn index - K-Means Dunn index - True label
0 cmpl_dd sld 1.439487 0.176731 1.439487 1.439487
1 cmpl_dd cmpl_ld 2.276942 1.383466 2.276942 2.276942
2 cmpl_dd avld 1.785927 0.849381 1.785927 1.785927
3 cmpl_dd cent_ld 1.75401 0.779739 1.75401 1.75401
4 cmpl_dd av_cent_ld 1.772566 0.806024 1.772566 1.772566
5 avdd sld 2.749758 0.239415 2.749758 2.749758
6 avdd cmpl_ld 4.349494 1.874161 4.349494 4.349494
7 avdd avld 3.41154 1.150644 3.41154 3.41154
8 avdd cent_ld 3.350572 1.0563 3.350572 3.350572
9 avdd av_cent_ld 3.386017 1.091909 3.386017 3.386017
10 cent_dd sld 2.186462 0.196358 2.186462 2.186462
11 cent_dd cmpl_ld 3.458488 1.537102 3.458488 3.458488
12 cent_dd avld 2.712676 0.943706 2.712676 2.712676
13 cent_dd cent_ld 2.664197 0.86633 2.664197 2.664197
14 cent_dd av_cent_ld 2.692382 0.895534 2.692382 2.692382
In [ ]:
 
In [ ]: